
Knowledge Distillation - Build Smaller, Faster AI Models


3 August 2025


AWS AI and Data Conference 2025 – Knowledge Distillation: Build Smaller, Faster AI Models

Abstract

Knowledge distillation transfers capabilities from large language models to smaller, faster models while maintaining performance. Organizations can achieve dramatic improvements in throughput and cost efficiency. Learn how to implement distillation using Amazon Bedrock or by building a custom solution on Amazon SageMaker. Julien Simon will showcase how Arcee AI uses distillation to develop industry-leading small language models (SLMs) based on open architectures. He will also introduce the open-source DistillKit library and demonstrate several newly distilled SLMs from Arcee AI.
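For readers unfamiliar with the technique, the core idea is that the student model is trained to match the teacher's softened output distribution in addition to the usual hard-label loss. The snippet below is a minimal, generic sketch assuming PyTorch; the function name and hyperparameters are illustrative only, and it does not represent the Bedrock, SageMaker, or DistillKit implementations discussed in the talk.

```python
# Minimal knowledge-distillation loss sketch (assumes PyTorch; illustrative,
# not the DistillKit or AWS API). The student mimics the teacher's softened
# probabilities while still learning from the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Weighted combination of the two objectives.
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In practice, the teacher's logits are produced by running the large model in inference mode over the training data, and only the student's parameters are updated with this combined loss.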

Speakers:

Laurens van der Maas, Machine Learning Engineer, AWS
Aleksandra Dokic, Senior Data Scientist, AWS
Jean Launay Orlanda, Engagement Manager, AWS

Learn more about AWS events: https://go.aws/events