yangjianxin1/Firefly
Firefly is an open-source toolkit for efficient large language model training, supporting pre-training, instruction fine-tuning, and DPO with parameter-efficient methods such as LoRA and QLoRA.
Detailed Introduction
Firefly is an open-source project designed to simplify and optimize the training of large language models. It provides a comprehensive toolkit covering multiple training stages, including pre-training, instruction fine-tuning (SFT), and direct preference optimization (DPO). By integrating efficient techniques such as LoRA, QLoRA, and Unsloth, Firefly significantly reduces the computational resources and time required for LLM development. It is broadly compatible with mainstream open-source models and has performed strongly on public leaderboards, making advanced LLM training accessible to developers with limited hardware.
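The core idea behind LoRA, one of the efficiency techniques mentioned above, can be sketched in plain NumPy: the pretrained weight matrix is frozen, and only a small low-rank update is trained. This is an illustrative sketch of the math, not Firefly's actual API (all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight (d_out x d_in); never updated during fine-tuning.
d_out, d_in = 16, 32
W = rng.normal(size=(d_out, d_in))

# LoRA adapter: low-rank factors B (d_out x r) and A (r x d_in).
# B starts at zero so the adapted layer initially equals the base layer.
r, alpha = 4, 8
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x):
    # Base output plus the scaled low-rank update (alpha / r is the LoRA scaling).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0 the adapter contributes nothing: output matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)

# Only the adapter's r * (d_in + d_out) parameters are trained,
# versus d_out * d_in for full fine-tuning.
trainable = r * (d_in + d_out)
full = d_out * d_in
assert trainable < full
```

QLoRA applies the same adapter idea on top of a 4-bit-quantized frozen base model, which is why these methods let large models be fine-tuned on modest GPUs.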