iusztinpaul/hands-on-llms
Learn to design, train, and deploy a real-time financial advisor LLM system using a hands-on, three-pipeline approach.
Core Features
- Three-pipeline architecture: data ingestion, fine-tuning, and inference
- LLM fine-tuning with QLoRA, with experiments tracked in Comet ML
- Real-time inference using Retrieval Augmented Generation (RAG) over a Qdrant vector store
- Serverless deployment via Beam, with streaming data infrastructure on AWS
Detailed Introduction
'Hands-on LLMs' is an educational course that teaches the end-to-end process of building, training, and deploying a real-time financial advisor LLM system. It guides users through a three-pipeline architecture covering data ingestion, LLM fine-tuning with QLoRA, and inference using a Retrieval Augmented Generation (RAG) approach. Along the way, the course uses modern MLOps tools and platforms: Comet ML for experiment tracking, Qdrant for vector storage, Beam for serverless deployment, and AWS for the streaming data infrastructure, giving learners practical experience in developing production-ready LLM applications.
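To make the RAG inference step concrete, the sketch below shows the core idea in plain Python: documents are embedded into vectors, the query retrieves its nearest neighbors, and the retrieved context is injected into the LLM prompt. This is an illustrative toy, not the course's actual code: the `embed` function, the in-memory `VectorStore`, and the prompt template are all hypothetical stand-ins (the real pipeline uses a sentence-embedding model and a hosted Qdrant collection).

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical bag-of-characters embedding; a real pipeline would
    # call a sentence-embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """Toy in-memory stand-in for a Qdrant collection."""

    def __init__(self) -> None:
        self.docs: list[tuple[list[float], str]] = []

    def add(self, text: str) -> None:
        self.docs.append((embed(text), text))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Rank stored documents by similarity to the query embedding.
        q = embed(query)
        scored = sorted(self.docs, key=lambda d: cosine(d[0], q), reverse=True)
        return [text for _, text in scored[:k]]

def build_prompt(question: str, store: VectorStore) -> str:
    # RAG: retrieved snippets (e.g. fresh financial news) are injected
    # into the prompt so the fine-tuned LLM answers with current context.
    context = "\n".join(store.search(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

In the actual course, the feature pipeline keeps the vector store continuously updated from a streaming news source, so the inference pipeline always retrieves up-to-date context.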