Learning Resource
29.9k 2026-04-18
datawhalechina/self-llm
A comprehensive Linux-based tutorial for deploying and fine-tuning open-source LLMs/MLLMs, tailored for Chinese beginners.
Core Features
Linux-based environment configuration for diverse LLMs.
Deployment and usage guides for mainstream open-source LLMs (e.g., LLaMA, ChatGLM, InternLM).
Application guidance including CLI, online demo deployment, and LangChain integration.
Full-parameter and efficient fine-tuning methods (e.g., LoRA, P-tuning, distributed fine-tuning).
Supports over 50 mainstream large language models with complete tutorials.
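Since LoRA is one of the fine-tuning methods the tutorials cover, the core idea can be sketched in a few lines of NumPy. This is an illustrative sketch only; the sizes, names, and scaling shown here are generic LoRA conventions, not code from the repo:

```python
import numpy as np

# LoRA sketch: instead of updating the full weight W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r), with r << d_in.
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4           # illustrative sizes, not from the repo
alpha = 8                             # common LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in))        # trainable low-rank factor
B = np.zeros((d_out, r))              # zero-initialized so the update starts at 0

def lora_forward(x):
    """Base projection plus the scaled low-rank update (alpha/r) * B @ A."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d_in))
# With B = 0, the adapted layer exactly matches the frozen base layer.
assert np.allclose(lora_forward(x), x @ W.T)

# Trainable parameters: r*(d_in + d_out) for LoRA vs d_in*d_out for full tuning.
print(r * (d_in + d_out), "vs", d_in * d_out)   # 512 vs 4096
```

The parameter count is why LoRA (and variants like P-tuning) make fine-tuning feasible on consumer GPUs: only the small factors are trained while the base weights stay frozen.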
Detailed Introduction
This project is a learning resource aimed at beginners in China, covering open-source large language models (LLMs) and multimodal large language models (MLLMs) on Linux. It provides end-to-end guidance, from environment setup through local deployment to efficient fine-tuning, lowering the barrier to working with open-source models. The goal is to help more students and researchers make open-source LLMs a routine part of their learning and research.