
Deep Networks Are Kernel Machines


13 March 2021



Abstract

Deep learning’s successes are often attributed to its ability to automatically discover new representations of the data, rather than relying on handcrafted features as other learning methods do. In this talk, however, Pedro Domingos will show that deep networks learned by the standard gradient descent algorithm are in fact mathematically approximately equivalent to kernel machines, a learning method that simply memorizes the data and uses it directly for prediction via a similarity function (the kernel). This greatly enhances the interpretability of deep network weights by revealing that they are effectively a superposition of the training examples, while the network architecture incorporates knowledge of the target function into the kernel. The talk will include a discussion of both the main ideas behind this result and some of its more startling consequences for deep learning, kernel machines, and machine learning at large.
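To make the kernel-machine form concrete, here is a minimal numerical sketch (mine, not from the talk or the paper): a one-parameter model is trained by gradient descent, and its final output at a test point is reconstructed as the initial output plus a sum of path-kernel contributions, one per training example. All names in the code (`f`, `grad_f`, `X`, `Y`, `x_test`) are illustrative.

```python
import numpy as np

# Sketch of the paper's result: the output of a model trained by gradient
# descent equals, to first order in the learning rate,
#   f(x; w_T) ~= f(x; w_0) - lr * sum_t sum_i L'_i(t) * grad_f(x; w_t) * grad_f(x_i; w_t)
# i.e. a superposition of path-kernel terms, one per training example.

def f(w, x):       # one-parameter model: y = tanh(w * x)
    return np.tanh(w * x)

def grad_f(w, x):  # d/dw tanh(w * x) = x * (1 - tanh(w * x)^2)
    return x * (1.0 - np.tanh(w * x) ** 2)

X = np.array([0.5, -1.0, 2.0])   # toy training inputs
Y = np.array([0.4, -0.7, 0.9])   # toy training targets
w, lr, steps = 0.1, 0.01, 2000
x_test = 1.3

y0 = f(w, x_test)     # initial model output at the test point
kernel_sum = 0.0      # accumulated path-kernel contributions
for _ in range(steps):
    resid = f(w, X) - Y            # L'_i for squared loss L = 0.5 * (y - y*)^2
    g_train = grad_f(w, X)
    # This step's contribution to the kernel expansion at x_test:
    kernel_sum += -lr * np.sum(resid * g_train * grad_f(w, x_test))
    w -= lr * np.sum(resid * g_train)   # the actual gradient descent step

print("trained model output:     ", f(w, x_test))
print("kernel-machine expansion: ", y0 + kernel_sum)
```

The two printed numbers agree closely, and the gap shrinks as the learning rate decreases; in the gradient-flow limit the correspondence is exact, with the kernel being the path kernel (gradient inner products accumulated along the training trajectory) and the per-example coefficients absorbing the loss derivatives.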

For the 2020 paper behind this talk, see “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine”, available at https://arxiv.org/abs/2012.00152.

Pedro Domingos is a professor of computer science at the University of Washington and the author of “The Master Algorithm”, the worldwide bestseller introducing machine learning to a broad audience. He is a winner of the SIGKDD Innovation Award and the IJCAI John McCarthy Award, two of the highest honors in data science and AI, and a Fellow of the AAAS and AAAI. His research spans a wide variety of topics in machine learning, artificial intelligence, and data science. He helped start the fields of statistical relational AI, data stream mining, adversarial learning, machine learning for information integration, and influence maximization in social networks.

