

k8sgpt
k8sgpt is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.
It has SRE experience codified into its analyzers and helps to pull out the most relevant information to enrich it with AI.
It integrates out of the box with OpenAI, Azure, Cohere, Amazon Bedrock, Google Gemini, and local models.
CLI Installation
Linux/Mac via brew
brew install k8sgpt
or
brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt
Windows
- Download the latest Windows binaries of k8sgpt from the Release tab based on your system architecture.
- Extract the downloaded package to your desired location, and add the binary's location to your system PATH environment variable.
Operator Installation
To install within a Kubernetes cluster, please use our k8sgpt-operator; installation instructions are available here.
This mode of operation is ideal for continuous monitoring of your cluster and can integrate with your existing monitoring such as Prometheus and Alertmanager.
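As a sketch, the operator is typically installed via its Helm chart; the repository URL, release name, and namespace below follow the operator's commonly documented defaults and may change, so prefer the linked instructions:

```shell
# Add the k8sgpt Helm chart repository and install the operator
# into its own namespace (names here follow the operator docs).
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
```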
Quick Start
- Currently, the default AI provider is OpenAI; you will need to generate an API key from OpenAI.
  - You can do this by running k8sgpt generate to open a browser link to generate it.
- Run k8sgpt auth add to set it in k8sgpt.
  - You can provide the password directly using the --password flag.
- Run k8sgpt filters to manage the active filters used by the analyzer. By default, all filters are executed during analysis.
- Run k8sgpt analyze to run a scan.
- Use k8sgpt analyze --explain to get a more detailed explanation of the issues.
- You can also run k8sgpt analyze --with-doc (with or without the explain flag) to get the official documentation from Kubernetes.
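Put together, the steps above look roughly like the following; the backend and model names are illustrative, and the API key placeholder is yours to fill in:

```shell
# Open a browser link to generate an OpenAI API key.
k8sgpt generate

# Register the key with k8sgpt (backend and model are illustrative choices).
k8sgpt auth add --backend openai --model gpt-3.5-turbo --password <your-api-key>

# Run a scan and ask the configured backend to explain the findings.
k8sgpt analyze --explain
```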
Using with Claude Desktop
K8sGPT can be integrated with Claude Desktop to provide AI-powered Kubernetes cluster analysis. This integration requires K8sGPT v0.4.14 or later.
Prerequisites
- Install K8sGPT v0.4.14 or later:
  brew install k8sgpt
- Install Claude Desktop from the official website
- Configure K8sGPT with your preferred AI backend:
  k8sgpt auth
Setup
- Start the K8sGPT MCP server:
  k8sgpt serve --mcp
- In Claude Desktop:
  - Open Settings
  - Navigate to the Integrations section
  - Add K8sGPT as a new integration
  - The MCP server will be automatically detected
- Configure Claude Desktop with the following JSON:
  {
    "mcpServers": {
      "k8sgpt": {
        "command": "k8sgpt",
        "args": ["serve", "--mcp"]
      }
    }
  }
Usage
Once connected, you can use Claude Desktop to:
- Analyze your Kubernetes cluster
- Get detailed insights about cluster health
- Receive recommendations for fixing issues
- Query cluster information
Example commands in Claude Desktop:
- “Analyze my Kubernetes cluster”
- “What’s the health status of my cluster?”
- “Show me any issues in the default namespace”
Troubleshooting
If you encounter connection issues:
- Ensure K8sGPT is running with the MCP server enabled
- Verify your Kubernetes cluster is accessible
- Check that your AI backend is properly configured
- Restart both K8sGPT and Claude Desktop
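A quick way to work through that checklist from a terminal (this assumes k8sgpt and kubectl are on your PATH):

```shell
# Start (or restart) the MCP server in the foreground and watch for errors.
k8sgpt serve --mcp

# In another terminal: can you reach the cluster at all?
kubectl cluster-info

# Is an AI backend configured and set as the default?
k8sgpt auth list
```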
For more information, visit our documentation.
Analyzers
K8sGPT uses analyzers to triage and diagnose issues in your cluster. It ships with a set of built-in analyzers, and you can also write your own.
Built in analyzers
Enabled by default
- podAnalyzer
- pvcAnalyzer
- rsAnalyzer
- serviceAnalyzer
- eventAnalyzer
- ingressAnalyzer
- statefulSetAnalyzer
- deploymentAnalyzer
- jobAnalyzer
- cronJobAnalyzer
- nodeAnalyzer
- mutatingWebhookAnalyzer
- validatingWebhookAnalyzer
- configMapAnalyzer
Optional
- hpaAnalyzer
- pdbAnalyzer
- networkPolicyAnalyzer
- gatewayClass
- gateway
- httproute
- logAnalyzer
- storageAnalyzer
- securityAnalyzer
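Optional analyzers are enabled and disabled through filters. A brief sketch (check k8sgpt filters list for the exact filter names your version supports; the name below is illustrative):

```shell
# Show active and available filters.
k8sgpt filters list

# Enable an optional analyzer, e.g. network policies (name is illustrative).
k8sgpt filters add NetworkPolicy

# Disable it again.
k8sgpt filters remove NetworkPolicy
```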
Examples
Run a scan with the default analyzers
k8sgpt generate
k8sgpt auth add
k8sgpt analyze --explain
k8sgpt analyze --explain --with-doc
Filter on resource
k8sgpt analyze --explain --filter=Service
Filter by namespace
k8sgpt analyze --explain --filter=Pod --namespace=default
Output to JSON
k8sgpt analyze --explain --filter=Service --output=json
Anonymize during explain
k8sgpt analyze --explain --filter=Service --output=json --anonymize
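The JSON output is convenient for scripting. The snippet below processes a saved sample with jq; the field names (results, name, status) are assumptions based on typical k8sgpt output, so verify them against a real run:

```shell
# Write an illustrative sample of k8sgpt's --output=json shape to a file.
cat > /tmp/k8sgpt-sample.json <<'EOF'
{
  "provider": "openai",
  "status": "ProblemDetected",
  "problems": 1,
  "results": [
    { "kind": "Service", "name": "default/my-service",
      "error": [ { "Text": "Service has no endpoints" } ] }
  ]
}
EOF

# Pull out just the failing object names, e.g. to feed into alerting.
jq -r '.results[].name' /tmp/k8sgpt-sample.json
```

In a real pipeline you would pipe k8sgpt analyze --output=json straight into jq instead of a file.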
LLM AI Backends
K8sGPT uses the chosen LLM (generative AI) provider when you want to explain the analysis results using the --explain flag, e.g. k8sgpt analyze --explain. You can use the --backend flag to specify a configured provider (it's openai by default).
You can list available providers using k8sgpt auth list:
Default:
> openai
Active:
Unused:
> openai
> localai
> ollama
> azureopenai
> cohere
> amazonbedrock
> amazonsagemaker
> google
> huggingface
> noopai
> googlevertexai
> watsonxai
> customrest
> ibmwatsonxai
For detailed documentation on how to configure and use each provider see here.
To set a new default provider
k8sgpt auth default -p azureopenai
Default provider set to azureopenai
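A provider must be configured with k8sgpt auth add before it can be selected. As a hedged example for Azure OpenAI, all values below are placeholders and the exact flags per backend are covered in the provider documentation:

```shell
# Register an Azure OpenAI deployment as a backend (values are placeholders).
k8sgpt auth add --backend azureopenai \
  --baseurl https://<your-resource>.openai.azure.com/ \
  --engine <your-deployment-name> \
  --model gpt-4

# Make it the default provider.
k8sgpt auth default -p azureopenai
```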
Using Amazon Bedrock with inference profiles
System Inference Profile
k8sgpt auth add --backend amazonbedrock --providerRegion us-east-1 --model arn:aws:bedrock:us-east-1:123456789012:inference-profile/my-inference-profile
Application Inference Profile
k8sgpt auth add --backend amazonbedrock --providerRegion us-east-1 --model arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/2uzp4s0w39t6
Documentation
Find our official documentation available here