Ecosystem & Stack: huggingface-transformers
huggingface/peft
PEFT is a state-of-the-art library for Parameter-Efficient Fine-Tuning, drastically reducing the computational and storage costs of adapting large pretrained models.
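The core trick behind methods like LoRA, which PEFT implements, can be sketched in a few lines. This is a hypothetical NumPy illustration of the idea (not PEFT's actual code): the pretrained weight stays frozen while two small low-rank matrices are trained, so only a tiny fraction of parameters receives updates.

```python
# Minimal sketch of the low-rank adaptation (LoRA) idea behind PEFT:
# instead of updating a full weight matrix W, train two small matrices
# A and B whose product is added to the frozen W. Dimensions are
# illustrative, not tied to any real model.
import numpy as np

d, k, r = 768, 768, 8                    # layer dims and adapter rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, k))                     # trainable up-projection (zero init)

def forward(x):
    # Effective weight is W + A @ B; only A and B would receive gradients.
    return x @ (W + A @ B)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4f}")  # → 0.0208
```

Because B is initialized to zero, the adapted layer starts out exactly equal to the pretrained one, which is why LoRA fine-tuning is stable from step one.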
zyds/transformers-code
A comprehensive code repository accompanying a hands-on course for mastering Hugging Face Transformers, covering fundamental concepts through advanced fine-tuning and deployment techniques.
adapter-hub/adapters
A unified library for parameter-efficient and modular transfer learning, extending HuggingFace Transformers with various adapter methods.
wenge-research/YAYI
YaYi is an open-source Chinese large language model series, built on LLaMA 2 & BLOOM, designed to provide secure, reliable, and domain-specific AI capabilities for enterprise customers through extensive multi-domain instruction tuning.
RLHFlow/RLHF-Reward-Modeling
A comprehensive collection of recipes and code for training various reward models crucial for Reinforcement Learning from Human Feedback (RLHF) in large language models.
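Most pairwise reward models of the kind this repo trains optimize a Bradley-Terry objective: the model scores a chosen and a rejected response, and the loss pushes the chosen score above the rejected one. A minimal NumPy sketch of that loss (an illustration, not RLHFlow's implementation):

```python
# Bradley-Terry pairwise loss used for reward modeling:
# minimize -log sigmoid(r_chosen - r_rejected) over preference pairs.
import numpy as np

def bt_loss(r_chosen, r_rejected):
    # Numerically stable -log(sigmoid(m)) = log(1 + exp(-m)) = logaddexp(0, -m)
    margin = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    return float(np.mean(np.logaddexp(0.0, -margin)))

print(bt_loss([2.0, 1.5], [0.5, 1.0]))  # small loss: chosen scored higher
print(bt_loss([0.0], [2.0]))            # large loss: preference violated
```

The `logaddexp` form avoids overflow for large negative margins, which matters when scores come from an unbounded scalar head.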
dottxt-ai/outlines
Outlines guarantees structured outputs from Large Language Models during generation, eliminating post-processing headaches and ensuring data integrity.
lucidrains/imagen-pytorch
A PyTorch implementation of Google's Imagen, a cascaded text-to-image diffusion model that Google reported surpasses DALL-E 2 in synthesis quality.