Tags: #local-inference

Local LLM Runtime and Serving Platform
ollama/ollama (Docker, 168.8k stars)
Easily run, manage, and interact with open-source large language models locally on your machine.
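
To make the entry concrete: Ollama exposes a small local HTTP API for completions. The sketch below assumes the default port 11434 and an already-pulled model tag (llama3 here); both are assumptions about your local setup, not part of the listing above.

```python
import requests

# Query a locally running Ollama server for a single completion.
# Assumes the model was pulled beforehand, e.g. `ollama pull llama3`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # any locally pulled model tag
        "prompt": "Why is the sky blue?",
        "stream": False,     # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```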

Local AI Inference Platform
mudler/LocalAI (Docker, 45.1k stars)
An open-source AI engine that runs a wide range of models (LLMs, vision, voice, image, video) locally on any hardware, including CPU-only machines, with a drop-in replacement API for commercial services.
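
Because LocalAI mirrors the OpenAI API, the standard OpenAI Python client can be pointed at it unchanged. A minimal sketch, assuming LocalAI's default port 8080 and a locally configured model name (the "gpt-4" alias below is an assumption about your config):

```python
from openai import OpenAI

# Point the stock OpenAI client at a LocalAI instance instead of api.openai.com.
# LocalAI does not verify the key, so any placeholder string works.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

chat = client.chat.completions.create(
    model="gpt-4",  # resolves to whichever local model your config maps to this name
    messages=[{"role": "user", "content": "Summarize what LocalAI does in one sentence."}],
)
print(chat.choices[0].message.content)
```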

Local AI Model Management Proxy
mostlygeek/llama-swap (Docker, 3.2k stars)
A high-performance Go proxy that hot-swaps and manages multiple local generative AI models behind OpenAI- and Anthropic-compatible APIs.
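
From a client's point of view, the swap is driven by the "model" field of each request: the proxy starts the matching backend on demand. A sketch of that behavior, where the port and both model names are assumptions that must match your llama-swap configuration:

```python
import requests

PROXY = "http://localhost:8080/v1/chat/completions"  # assumed listen address

def ask(model: str, prompt: str) -> str:
    # The proxy routes on "model" and loads that backend if it isn't running.
    resp = requests.post(
        PROXY,
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=300,  # the first request to a model may wait for it to load
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Changing the model name between calls triggers a hot swap on the proxy side.
print(ask("qwen-7b", "Hello from the first model."))
print(ask("llama-13b", "Hello from the second model."))
```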

AI/ML Library
SciSharp/LLamaSharp (.NET, 3.6k stars)
A cross-platform C#/.NET library for efficient local inference of large language models (LLMs) such as LLaMA and LLaVA.

AI Code Assistant Server
fauxpilot/fauxpilot (Docker, 14.7k stars)
A locally hosted, open-source alternative to GitHub Copilot, enabling private and customizable AI-powered code generation on your own hardware.
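
FauxPilot serves completions through an OpenAI-style Completions endpoint, which is what lets Copilot-type clients target it. A rough sketch, where port 5000 and the "codegen" model name are assumptions about a default deployment:

```python
import requests

# Ask a local FauxPilot server to continue a code snippet via its
# OpenAI-compatible Completions endpoint. Adjust port/model to your setup.
resp = requests.post(
    "http://localhost:5000/v1/completions",
    json={
        "model": "codegen",        # assumed model name for a default install
        "prompt": "def fibonacci(n):",
        "max_tokens": 64,
        "temperature": 0.1,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```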
