LLM Fine-tuning Framework
11.7k 2026-04-15
axolotl-ai-cloud/axolotl
A free and open-source framework designed for efficient and flexible fine-tuning of large language models.
Core Features
Supports a wide range of LLM architectures (e.g., Mistral, Qwen, GLM, Kimi-Linear).
Integrates advanced fine-tuning techniques (e.g., LoRA, MoE expert quantization, GDPO, EAFT).
Optimized for distributed training (e.g., FSDP2, Distributed Muon Optimizer).
Improves long context handling (e.g., Scalable Softmax, SageAttention).
Detailed Introduction
Axolotl is an open-source framework dedicated to simplifying and accelerating the fine-tuning of large language models. It gives researchers and developers a comprehensive toolkit for adapting pre-trained LLMs to specific tasks and datasets. By supporting a wide range of model architectures and integrating optimization techniques such as LoRA and MoE expert quantization, Axolotl reduces the compute and memory required for fine-tuning while improving downstream performance, making advanced LLM customization broadly accessible.
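To make the workflow concrete: an Axolotl run is driven by a single YAML config file. The sketch below shows what a parameter-efficient (QLoRA) fine-tune might look like; the model name and dataset path are placeholders, and exact keys can vary between Axolotl versions, so treat this as illustrative rather than a verbatim recipe.

```yaml
# Illustrative QLoRA fine-tuning config (keys may differ across Axolotl versions)
base_model: mistralai/Mistral-7B-v0.1   # any supported architecture
load_in_4bit: true                       # quantize the frozen base model for QLoRA
adapter: qlora

lora_r: 16                               # adapter rank
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true                 # attach adapters to all linear layers

datasets:
  - path: ./data/train.jsonl             # placeholder dataset path
    type: alpaca                         # instruction-tuning prompt format

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_torch
output_dir: ./outputs/qlora-mistral
```

A config like this is typically launched with `axolotl train config.yml` (or, in older releases, `accelerate launch -m axolotl.cli.train config.yml`); multi-GPU FSDP runs add the corresponding distributed-training options to the same file.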