Stop Thinking, Just Do!

Sungsoo Kim's Blog

Randomized Embeddings and Neural Networks


9 October 2023


Article Source

Randomized Embeddings and Neural Networks

Abstract

In this talk, we explore connections between randomized embeddings and neural networks through the lens of convex analysis. We first introduce exact convex formulations of neural network training problems, showing that rectified linear unit (ReLU) networks can be trained to global optimality via convex programs whose number of variables is polynomial in the number of training samples but exponential in the feature dimension. We then show that this exponential dependence on the feature dimension can be reduced by a randomized zonotope vertex sampling algorithm, whose analysis is tightly connected to randomized embeddings of ℓ2 into ℓ1, Dvoretzky's theorem, and hyperplane tessellations. Finally, we present numerical simulations verifying our claims and illustrating that the proposed approach is faster and more reliable than standard local search heuristics such as stochastic gradient descent and its variants.
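To make the sampled convex formulation concrete, the sketch below sets up two-layer ReLU training as a convex program in the style of known exact reformulations (e.g., Pilanci and Ergen, 2020), with the exponentially many ReLU activation patterns replaced by randomly sampled ones, as the abstract describes. The function names, the cvxpy modeling, and the hyperparameters (`beta`, `num_patterns`) are illustrative assumptions, not the talk's actual implementation.

```python
# A minimal sketch (under the assumptions stated above) of the sampled
# convex program for two-layer ReLU network training.
import numpy as np
import cvxpy as cp

def sample_activation_patterns(X, num_samples, seed=0):
    """Sample ReLU activation patterns D = 1[Xu >= 0] from random Gaussian
    directions u -- the randomized vertex-sampling step of the abstract."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.standard_normal((d, num_samples))
    patterns = (X @ U >= 0).astype(float)       # n x num_samples, entries in {0, 1}
    # Deduplicate columns so each sampled arrangement cell appears once.
    return np.unique(patterns, axis=1).T        # each row is one pattern

def convex_relu_fit(X, y, beta=1e-3, num_patterns=50):
    """Solve the convex program restricted to the sampled activation patterns."""
    n, d = X.shape
    Ds = sample_activation_patterns(X, num_patterns)
    V = [cp.Variable(d) for _ in Ds]            # weights for the positive branch
    W = [cp.Variable(d) for _ in Ds]            # weights for the negative branch
    # Network output under fixed patterns: sum_i D_i X (v_i - w_i).
    pred = sum(cp.multiply(D, X @ (v - w)) for D, v, w in zip(Ds, V, W))
    # Group-norm regularizer corresponding to weight decay in the ReLU network.
    reg = beta * sum(cp.norm(v, 2) + cp.norm(w, 2) for v, w in zip(V, W))
    # Cone constraints keeping each weight block consistent with its pattern.
    cons = []
    for D, v, w in zip(Ds, V, W):
        S = 2 * D - 1                           # sign pattern in {-1, +1}
        cons += [cp.multiply(S, X @ v) >= 0, cp.multiply(S, X @ w) >= 0]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(pred - y) + reg), cons)
    prob.solve()
    return prob.value
```

With a modest `num_patterns`, this is a second-order cone program that off-the-shelf solvers handle directly; increasing `num_patterns` trades runtime for a closer approximation of the full convex program, whose pattern count grows exponentially with the feature dimension.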

