Tags: #alignment

PKU-Alignment/safe-rlhf
AI/ML Research Framework · Hugging Face · 1.6k

A modular open-source framework for training constrained, value-aligned Large Language Models (LLMs) with Safe Reinforcement Learning from Human Feedback (Safe RLHF).
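
At a high level (a sketch following the published Safe RLHF formulation; the symbols R, C, d, and lambda below are illustrative and not necessarily the repository's exact notation), the framework casts alignment as constrained policy optimization: a reward model R scores helpfulness, a separate cost model C scores harmfulness, and the policy \pi_\theta is trained to maximize expected reward while keeping expected cost below a budget d,

\max_{\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\!\left[ R(x, y) \right]
\quad \text{s.t.} \quad
\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\!\left[ C(x, y) \right] \le d,

which is typically handled via a Lagrangian relaxation, alternating updates of the policy parameters \theta and a multiplier \lambda \ge 0:

\min_{\lambda \ge 0} \; \max_{\theta} \;
\mathbb{E}\!\left[ R(x, y) \right] - \lambda \left( \mathbb{E}\!\left[ C(x, y) \right] - d \right).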
