Fine-Tuning LLMs with Declarative ML Orchestration

Join us for this event with Niels Bantilan, Chief ML Engineer, on fine-tuning LLMs with ML orchestration. Foundation LLMs are trained by a handful of organizations with massive compute resources. The broader ML community fine-tunes these models for specific use cases, often running into significant infrastructure challenges along the way.

In this session, Niels will demonstrate how to use Flyte, an open-source orchestration platform hosted by the Linux Foundation.

Flyte lets you declaratively specify the infrastructure needed for a broad range of ML workloads, including fine-tuning LLMs with limited resources.

Key Takeaways

  • Efficiently fine-tune large language models with limited resources using Flyte.
  • Explore torchrun and FSDP for distributed training, and LoRA for parameter-efficient fine-tuning.
  • Leverage Flyte's reproducibility and cost management features for ML workloads.