LLM Experimentation and Evaluation Tool
3.0k 2026-04-18
hegelai/prompttools
An open-source, self-hostable toolkit for testing, experimenting with, and evaluating prompts, large language models (LLMs), and vector databases.
Core Features
Experiment with various LLMs (e.g., OpenAI, LLaMA, Anthropic) and their parameters.
Test and evaluate the performance and accuracy of prompts.
Assess retrieval accuracy of different vector databases (e.g., Chroma, Weaviate).
Prototype through flexible interfaces, including code, Jupyter notebooks, and a local Streamlit playground.
Integrate with a broad range of popular LLM APIs and vector databases.
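The core workflow behind the features above is a grid experiment: run every combination of model and prompt, then score each completion with an evaluation function. A minimal sketch of that loop in plain Python (the function and metric names here are illustrative, not the actual prompttools API; a stub stands in for a real LLM call):

```python
from itertools import product

def run_experiment(models, prompts, generate, score):
    """Run every (model, prompt) combination and score each completion."""
    results = []
    for model, prompt in product(models, prompts):
        completion = generate(model, prompt)
        results.append({
            "model": model,
            "prompt": prompt,
            "completion": completion,
            "score": score(prompt, completion),
        })
    return results

# Stub generator standing in for a real LLM API call.
def fake_generate(model, prompt):
    return f"{model} says: {prompt.upper()}"

# Toy metric: 1.0 if the completion echoes the prompt, else 0.0.
def echo_score(prompt, completion):
    return 1.0 if prompt.upper() in completion else 0.0

results = run_experiment(
    models=["model-a", "model-b"],
    prompts=["hello", "world"],
    generate=fake_generate,
    score=echo_score,
)
print(len(results))  # one row per (model, prompt) pair -> 4
```

In the real library, the stub generator is replaced by an API-backed experiment class and the score function by a built-in or custom evaluation, but the run-then-evaluate shape is the same.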
Quick Start
pip install prompttools
Detailed Introduction
PromptTools is an open-source project by Hegel AI that gives developers robust tools for LLM and prompt engineering. It supports experimentation with, testing of, and evaluation of large language models, their prompts, and associated vector databases. Because the toolkit is self-hostable, via code, notebooks, or a local playground, teams can tune and compare models and retrieval systems precisely without sending data to a third party.
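Evaluating vector databases, as described above, typically means measuring retrieval accuracy: for a set of queries with known relevant documents, check how often the relevant document appears in the top-k results. A self-contained sketch of recall@k over a toy in-memory store (illustrative only; real backends like Chroma or Weaviate handle indexing and similarity search internally):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(store, query, k):
    """Return ids of the k stored vectors most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(item[1], query), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

def recall_at_k(store, queries, k):
    """Fraction of queries whose relevant document appears in the top k."""
    hits = sum(1 for q, relevant in queries if relevant in top_k(store, q, k))
    return hits / len(queries)

# Toy store: (document id, embedding) pairs.
store = [("doc1", [1.0, 0.0]), ("doc2", [0.0, 1.0]), ("doc3", [0.7, 0.7])]
# Each query pairs an embedding with the id of its known-relevant document.
queries = [([0.9, 0.1], "doc1"), ([0.1, 0.9], "doc2")]

print(recall_at_k(store, queries, k=1))  # 1.0: both queries retrieve correctly
```

Swapping different embedding models or databases into this loop and comparing the resulting recall is the kind of experiment the toolkit is built to run at scale.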