February 12 from 10am to 11am PT

It’s no exaggeration to say everyone is talking about DeepSeek-R1. As the first open-source model to close the performance gap with top commercial models, and in many cases surpass them, DeepSeek-R1 has set a new standard. What makes DeepSeek-R1 compelling is its innovative training approach, which relies on advanced reinforcement learning techniques. This method diverges from traditional AI training by focusing on efficient learning processes rather than simply expanding datasets.

Since DeepSeek-R1 launched, the question we hear most often is: "Can I fine-tune the DeepSeek models?" Fine-tuning reasoning models like DeepSeek-R1 and its distillations is largely unexplored territory, with few clear best practices or established methodologies.

Recognizing this gap, we’ve developed a fine-tuning process that we’re excited to share. This new approach lets you customize reasoning models like the DeepSeek-R1 distillations to fit your specific use cases and domains.

In this webinar, we’ll cover:

  1. How to fine-tune DeepSeek-R1-Distill-Qwen-7B: Practical steps and strategies for customization (see the sketch after this list)
  2. Performance benchmarks: Quantifying the impact of fine-tuning on reasoning tasks
  3. When to fine-tune a reasoning model: Guidance on when fine-tuning pays off vs. when to stick with a standard SLM
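To give a flavor of what topic 1 covers, here is a minimal LoRA fine-tuning sketch using the Hugging Face transformers, peft, and datasets libraries. This is not the exact recipe we’ll present in the webinar: the dataset file my_reasoning_data.jsonl, its "text" field, and the hyperparameters are illustrative placeholders you would replace with your own.

```python
# Minimal LoRA fine-tuning sketch for DeepSeek-R1-Distill-Qwen-7B.
# Illustrative only: dataset path, "text" field, and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach low-rank adapters so only a small fraction of the weights is trained.
model = get_peft_model(
    model,
    LoraConfig(
        r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    ),
)

# Hypothetical JSONL file where each row's "text" field already contains the
# prompt, the reasoning trace, and the final answer as a single string.
dataset = load_dataset("json", data_files="my_reasoning_data.jsonl", split="train")
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="r1-distill-qwen-7b-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False produces standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

LoRA is shown here because it keeps the memory footprint of a 7B model manageable on a single GPU; the webinar digs into the practical choices this sketch glosses over.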

Reserve your spot today!

Featured Speakers:

Travis Addair
CTO and Cofounder

Arnav Garg
Machine Learning Lead

Ready to efficiently fine-tune and serve your own LLM?