LLM Fine-tuning Tool
3.7k 2026-04-18

hiyouga/ChatGLM-Efficient-Tuning

An efficient toolkit for fine-tuning ChatGLM-6B models using PEFT methods, enabling customization and deployment of large language models.

Core Features

Efficient fine-tuning with PEFT (LoRA, QLoRA)
Support for ChatGLM-6B and ChatGLM2-6B models
Integrated Web UI for training, evaluation, and inference
Reinforcement Learning from Human Feedback (RLHF) training support
OpenAI-compatible API serving for fine-tuned models
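To make the PEFT idea behind the feature list concrete, here is a minimal NumPy sketch of a single LoRA-adapted projection. The dimensions, rank, and scaling value are illustrative assumptions, not the toolkit's actual defaults: LoRA freezes the pretrained weight W and learns only a low-rank update (alpha/r) * B @ A, which is why fine-tuning touches a tiny fraction of the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for one projection in a ChatGLM-6B-sized layer
d_in, d_out, r = 4096, 4096, 8          # r is the LoRA rank (assumed value)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-init so the update starts at 0
alpha = 16                              # LoRA scaling hyperparameter (assumed)

def lora_forward(x):
    # Base output plus the scaled low-rank update: W x + (alpha/r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)                     # equals W @ x until B is trained

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

With these numbers the trainable adapter is roughly 0.39% of the full matrix, which is the source of the "efficient" in efficient fine-tuning; in practice the toolkit applies adapters like this through the Hugging Face PEFT library rather than by hand.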

Detailed Introduction

ChatGLM Efficient Tuning is a toolkit designed to simplify and accelerate fine-tuning of the ChatGLM-6B and ChatGLM2-6B large language models. By leveraging Parameter-Efficient Fine-Tuning (PEFT) techniques such as LoRA and QLoRA, it sharply reduces the compute and memory required for customization. The project provides a complete environment, including a web-based user interface for managing training, evaluation, and inference, and supports advanced methods such as RLHF. Its goal is to make LLM adaptation accessible to developers and researchers, who can integrate custom ChatGLM models into their applications, including ones that expect the OpenAI API format. Note that this project is no longer maintained; development continues in LLaMA-Factory.
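As a sketch of the OpenAI-compatible serving mentioned above: a fine-tuned model deployed behind the toolkit's API demo can be queried with a standard OpenAI-style chat-completions request. The endpoint URL, port, and model name below are assumptions for illustration; they depend on how the server is actually launched.

```python
import json

# Hypothetical local endpoint; actual host/port depend on the server launch
API_URL = "http://localhost:8000/v1/chat/completions"

# Request body in the OpenAI chat-completions format the toolkit mirrors
payload = {
    "model": "chatglm2-6b",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Summarize LoRA in one sentence."}
    ],
    "temperature": 0.7,
    "max_tokens": 128,
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the API server to be running):
#   import requests
#   r = requests.post(API_URL, json=payload, timeout=30)
#   print(r.json()["choices"][0]["message"]["content"])
```

Because the request and response shapes follow the OpenAI format, existing OpenAI client code can usually be pointed at the local endpoint with only a base-URL change.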
