
How to Train Your Own Large Language Models


18 January 2024


Article Source: How to Train Your Own Large Language Models

Abstract

Given the success of OpenAI’s GPT-4 and Google’s PaLM, every company is now assessing its own use cases for Large Language Models (LLMs). Many companies will ultimately decide to train their own LLMs for a variety of reasons, ranging from data privacy to increased control over updates and improvements. One of the most common reasons will be to make use of proprietary internal data.

In this session, we’ll go over how to train your own LLMs, from raw data to deployment in a user-facing production environment. We’ll discuss the engineering challenges, and the vendors that make up the modern LLM stack: Databricks, Hugging Face, and MosaicML. We’ll also break down what it means to train an LLM using your own data, including the various approaches and their associated tradeoffs.
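The abstract names Hugging Face as one part of that stack. Purely as an illustration (not code from the talk), the minimal sketch below shows the kind of starting point such a pipeline builds on: loading a pretrained causal language model and computing a next-token loss on a sample of internal text. The model name `gpt2` and the sample string are placeholders, not the model Replit actually trained.

```python
# Illustrative only: load a pretrained causal LM and tokenizer from the
# Hugging Face Hub as a starting point for further training on internal data.
# "gpt2" is a stand-in model name, not the model discussed in the talk.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Run one sample of "internal" text through the model the way a training
# pipeline would, using the inputs themselves as next-token labels.
batch = tokenizer("def hello():\n    print('hi')", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
print(float(outputs.loss))  # causal language-modeling loss on this sample
```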

Topics covered in this session:

  • How Replit trained a state-of-the-art LLM from scratch
  • The different approaches to using LLMs with your internal data
  • The differences between fine-tuning, instruction tuning, and RLHF (illustrated in the sketch after this list)
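As a rough illustration of the last bullet (again, not code from the talk), the sketch below shows the main practical difference between plain fine-tuning and instruction tuning: fine-tuning continues next-token prediction on raw domain text, while instruction tuning trains on prompt/response pairs and typically masks the prompt tokens out of the loss (label -100 is the ignore index in Hugging Face/PyTorch). RLHF goes a step further and optimizes the model against a learned reward model rather than fixed targets. The prompt template here is hypothetical.

```python
# Illustrative sketch: how training examples differ between plain fine-tuning
# and instruction tuning. The prompt/response template below is hypothetical.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Plain fine-tuning: raw internal text; every token contributes to the loss.
raw_text = "def parse_config(path):\n    return json.load(open(path))\n"
ft_ids = tokenizer(raw_text)["input_ids"]
ft_labels = list(ft_ids)  # labels == inputs; the model shifts them internally

# Instruction tuning: prompt/response pairs; loss only on the response tokens.
prompt = ("### Instruction:\nWrite a function that loads a JSON config file.\n\n"
          "### Response:\n")
response = "def parse_config(path):\n    return json.load(open(path))\n"
prompt_ids = tokenizer(prompt)["input_ids"]
response_ids = tokenizer(response)["input_ids"]
it_ids = prompt_ids + response_ids
it_labels = [-100] * len(prompt_ids) + response_ids  # -100 = ignored by the loss
```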

Talk by: Reza Shabani

