

Dify
Dify is an open-source LLM app development platform. Its intuitive interface combines agentic AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, allowing you to quickly move from prototype to production.
Quick start
Before installing Dify, make sure your machine meets the following minimum system requirements:
- CPU >= 2 cores
- RAM >= 4 GiB
The easiest way to start the Dify server is with Docker Compose. Before running the following commands, make sure Docker and Docker Compose are installed on your machine:
```shell
cd dify
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at http://localhost/install and start the initialization process.
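If port 80 is already in use on your machine, the host port Dify's web entry point is exposed on can be overridden in `.env` before starting the stack. The `EXPOSE_NGINX_PORT` variable below appears in recent versions of `.env.example`; check the variable name in your copy:

```
# .env: host port for Dify's nginx entry point (default 80)
EXPOSE_NGINX_PORT=8080
```

With this override, the dashboard would be reachable at http://localhost:8080/install instead.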
Seeking help
Please refer to our FAQ if you encounter problems setting up Dify. Reach out to the community and us if you are still having issues.
If you’d like to contribute to Dify or do additional development, refer to our guide to deploying from source code.
Key features
1. Workflow: Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
2. Comprehensive model support: Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found here.
3. Prompt IDE: Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
4. RAG Pipeline: Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
5. Agent capabilities: You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
6. LLMOps: Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
7. Backend-as-a-Service: All of Dify’s offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
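As a sketch of the Backend-as-a-Service point, the snippet below builds a request against a self-hosted Dify chat app's REST API. The endpoint path, payload shape, and `app-` key format follow Dify's app API documentation, but verify them against the API reference for your version; the base URL and key below are placeholders.

```python
# Minimal sketch of calling a Dify chat app via its REST API.
# Endpoint and payload shape per Dify's app API docs; verify for your version.
import json
import urllib.request

API_BASE = "http://localhost/v1"  # self-hosted default; adjust to your deployment
API_KEY = "app-xxxxxxxx"          # per-app API key from the Dify dashboard (placeholder)

def build_chat_request(query: str, user: str) -> urllib.request.Request:
    """Build a blocking chat-messages request for a Dify chat app."""
    payload = {
        "inputs": {},                 # app-defined input variables, if any
        "query": query,               # the end user's message
        "response_mode": "blocking",  # return the full answer in one response
        "user": user,                 # stable identifier for the end user
    }
    return urllib.request.Request(
        f"{API_BASE}/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("What can you do?", "user-123")
# Sending it would be: urllib.request.urlopen(req) against a running instance.
```

The same pattern applies to the other app types (completion, workflow); only the endpoint path and payload fields change.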
Feature Comparison
| Feature | Dify.AI | LangChain | Flowise | OpenAI Assistants API |
| --- | --- | --- | --- | --- |
| Programming Approach | API + App-oriented | Python Code | App-oriented | API-oriented |
| Supported LLMs | Rich Variety | Rich Variety | Rich Variety | OpenAI-only |
| RAG Engine | ✅ | ✅ | ✅ | ✅ |
| Agent | ✅ | ✅ | ❌ | ✅ |
| Workflow | ✅ | ❌ | ✅ | ❌ |
| Observability | ✅ | ✅ | ❌ | ❌ |
| Enterprise Features (SSO/Access control) | ✅ | ❌ | ❌ | ❌ |
| Local Deployment | ✅ | ✅ | ✅ | ❌ |
Using Dify
- **Cloud**: We host a Dify Cloud service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition**: Quickly get Dify running in your environment with this starter guide. Use our documentation for further references and more in-depth instructions.
- **Dify for enterprise / organizations**: We provide additional enterprise-centric features. Log your questions for us through this chatbot or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business License Inquiry) to discuss enterprise needs.
For startups and small businesses using AWS, check out Dify Premium on AWS Marketplace and deploy it to your own AWS VPC with one click. It’s an affordable AMI offering with the option to create apps with custom logo and branding.