Tags: #long-context
LMCache/LMCache — LLM Inference Optimization Engine (GPU, 8.0k stars)
LMCache is an LLM serving engine extension designed to significantly reduce Time-To-First-Token (TTFT) and boost throughput, especially in long-context scenarios, by intelligently reusing KV caches.
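The core idea behind KV-cache reuse — compute the KV tensors for a shared prefix (e.g. a long system prompt) once and reuse them across requests — can be illustrated with a toy prefix cache. This is a conceptual sketch only; all class and function names are hypothetical and this is not LMCache's actual API.

```python
# Toy illustration of prefix KV-cache reuse (hypothetical names,
# not LMCache's real API).

class PrefixKVCache:
    """Maps a token prefix to its (mock) KV entries so a shared
    prefix is computed only once."""

    def __init__(self):
        self._store = {}

    def lookup(self, tokens):
        """Return (cached_kv, n_reused) for the longest cached prefix."""
        for end in range(len(tokens), 0, -1):
            key = tuple(tokens[:end])
            if key in self._store:
                return self._store[key], end
        return None, 0

    def insert(self, tokens, kv):
        self._store[tuple(tokens)] = kv


def prefill(tokens, cache):
    """Compute KV only for the suffix not covered by the cache."""
    kv, reused = cache.lookup(tokens)
    new_kv = [f"kv({t})" for t in tokens[reused:]]  # stand-in for real attention
    full_kv = (list(kv) if kv else []) + new_kv
    cache.insert(tokens, full_kv)
    return full_kv, reused


cache = PrefixKVCache()
prompt = [1, 2, 3, 4, 5]
_, reused_first = prefill(prompt, cache)            # cold start: nothing reused
_, reused_second = prefill(prompt + [6, 7], cache)  # warm: 5 prefix tokens reused
print(reused_first, reused_second)  # 0 5
```

Skipping prefill work for the reused prefix is what cuts TTFT: only the new suffix tokens need attention computation.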
dvlab-research/LongLoRA — AI Research Project (huggingface, 2.7k stars)
LongLoRA is an efficient fine-tuning method, with associated models and datasets, designed to extend the context window of Large Language Models (LLMs) for processing longer inputs.
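The attention pattern that makes LongLoRA's fine-tuning cheap is, per its paper, shifted sparse attention (S²-Attn): attention is computed within fixed-size groups, and half the heads shift the sequence by half a group so information flows across group boundaries. The sketch below shows only the grouping pattern, with illustrative names, not LongLoRA's actual implementation.

```python
import numpy as np

# Toy sketch of the S2-Attn grouping idea (illustrative, not LongLoRA's code):
# each position attends only within its group; "shifted" heads roll the
# sequence by half a group before grouping.

def group_attention_pattern(seq_len, group_size, shifted):
    """Return the group id each position attends within."""
    pos = np.arange(seq_len)
    if shifted:
        pos = (pos + group_size // 2) % seq_len  # half-group shift
    return pos // group_size

normal = group_attention_pattern(8, 4, shifted=False)
shift = group_attention_pattern(8, 4, shifted=True)
print(normal)  # [0 0 0 0 1 1 1 1]
print(shift)   # [0 0 1 1 1 1 0 0] -- groups straddle the old boundary
```

Because the shifted heads' groups straddle the unshifted heads' boundaries, tokens in different groups can still exchange information, while each head's attention cost stays linear in group size rather than quadratic in sequence length.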
ymcui/Chinese-LLaMA-Alpaca-2 — Large Language Model (LLM) Development Project (transformers, 7.2k stars)
An open-source project providing Chinese LLaMA-2 and Alpaca-2 large language models with enhanced Chinese capabilities and support for ultra-long contexts up to 64K tokens.
InternLM/InternLM-XComposer — Multimodal AI System (2.9k stars)
A comprehensive multimodal AI system specializing in long-term streaming video and audio interactions, offering advanced vision-language understanding and composition.