The Hamilton-Jacobi-Bellman (HJB) equation plays a central role in optimal control and differential games, enabling the computation of robust controls in feedback form. The main drawback of this approach is the so-called curse of dimensionality, since the HJB equation and the dynamical system live in the same, possibly high-dimensional, space. In this talk, I will present a data-driven method for approximating high-dimensional HJB equations based on tensor decompositions. The approach relies on knowledge of the value function and its gradient at sample points, together with a tensor train decomposition of the value function. The data will be generated by two possible techniques: the Pontryagin Maximum Principle and State-Dependent Riccati Equations. Numerical experiments demonstrate at most linear complexity in the dimension and improved stability in the presence of noise. Moreover, I will present an application to an agent-based model and a comparison with Deep Learning techniques. Finally, time permitting, I will consider the coupling of the proposed method with Model Order Reduction techniques and its application to boundary feedback control for the Navier-Stokes equations.
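
To illustrate the general workflow described above (not the talk's actual implementation), the following minimal Python sketch generates value-function samples from a State-Dependent Riccati Equation surrogate for a hypothetical two-dimensional pendulum-like system and fits them with a low-rank tensorized polynomial model by alternating least squares; in two dimensions a tensor train reduces to a low-rank matrix of coefficients, and the full method would additionally use gradient samples and scale to many dimensions. All system matrices, degrees, and ranks here are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from numpy.polynomial import legendre

# --- SDRE data generation for an illustrative 2D system ---
# dynamics: x1' = x2, x2' = sin(x1) + u, written as x' = A(x) x + B u
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # illustrative state cost
R = np.array([[1.0]])  # illustrative control cost

def sdre_value(x):
    """Surrogate value V(x) = x^T P(x) x from the state-dependent Riccati equation."""
    x1, x2 = x
    a21 = np.sinc(x1 / np.pi)            # sin(x1)/x1, well defined at x1 = 0
    A = np.array([[0.0, 1.0], [a21, 0.0]])
    P = solve_continuous_are(A, B, Q, R)
    return x @ P @ x

# sample points in [-1, 1]^2 and their SDRE surrogate values
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.array([sdre_value(x) for x in X])

# --- rank-r tensorized Legendre fit (a d = 2 tensor train is a low-rank matrix) ---
deg, rank = 5, 3
Phi1 = legendre.legvander(X[:, 0], deg)  # (n, deg+1) basis evaluations in x1
Phi2 = legendre.legvander(X[:, 1], deg)  # (n, deg+1) basis evaluations in x2
U = rng.standard_normal((deg + 1, rank))
V = rng.standard_normal((deg + 1, rank))

for _ in range(30):                      # alternating least squares on the two cores
    # fix V, solve for U: model is sum_k (Phi1 @ U[:, k]) * (Phi2 @ V[:, k])
    Zv = Phi2 @ V
    Au = np.einsum('ni,nk->nik', Phi1, Zv).reshape(len(y), -1)
    U = np.linalg.lstsq(Au, y, rcond=None)[0].reshape(deg + 1, rank)
    # fix U, solve for V
    Zu = Phi1 @ U
    Av = np.einsum('ni,nk->nik', Phi2, Zu).reshape(len(y), -1)
    V = np.linalg.lstsq(Av, y, rcond=None)[0].reshape(deg + 1, rank)

pred = np.einsum('nk,nk->n', Phi1 @ U, Phi2 @ V)
print("relative fit error:", np.linalg.norm(pred - y) / np.linalg.norm(y))
```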