k8sgpt

k8sgpt is a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.

It codifies SRE experience into its analyzers, pulling out the most relevant information and enriching it with AI.

It integrates out of the box with OpenAI, Azure, Cohere, Amazon Bedrock, Google Gemini, and local models.


CLI Installation

Linux/Mac via brew

brew install k8sgpt

or

brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt

Windows

  • Download the latest Windows binaries of k8sgpt from the Release tab based on your system architecture.
  • Extract the downloaded package to your desired location.
  • Add the binary's location to your system PATH environment variable.

Operator Installation

To install within a Kubernetes cluster please use our k8sgpt-operator with installation instructions available here

This mode of operation is ideal for continuous monitoring of your cluster and can integrate with your existing monitoring such as Prometheus and Alertmanager.

Quick Start

  • Currently, the default AI provider is OpenAI, so you will need to generate an API key from OpenAI.

    • You can do this by running k8sgpt generate, which opens a browser link where you can generate the key.
  • Run

    k8sgpt auth add

    to set the key in k8sgpt.

    • You can provide the key directly using the --password flag.
  • Run k8sgpt filters to manage the active filters used by the analyzer. By default, all filters are executed during analysis.

  • Run k8sgpt analyze to run a scan.

  • And use k8sgpt analyze --explain to get a more detailed explanation of the issues.

  • You can also run k8sgpt analyze --with-doc (with or without the --explain flag) to include the official Kubernetes documentation.
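
The quick-start steps above can be sketched end to end. The --backend, --model, and --password flags and the filters subcommands shown here exist in current releases, but the model name is illustrative, so verify the flags against k8sgpt auth add --help:

```shell
# Register an OpenAI key non-interactively (model name is illustrative)
k8sgpt auth add --backend openai --model gpt-4o --password "$OPENAI_API_KEY"

# Inspect which filters (analyzers) are active, then narrow them
k8sgpt filters list
k8sgpt filters add NetworkPolicy

# Run a scan, then a scan with AI explanations
k8sgpt analyze
k8sgpt analyze --explain
```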

Using with Claude Desktop

K8sGPT can be integrated with Claude Desktop to provide AI-powered Kubernetes cluster analysis. This integration requires K8sGPT v0.4.14 or later.

Prerequisites

  1. Install K8sGPT v0.4.14 or later:

    brew install k8sgpt
  2. Install Claude Desktop from the official website

  3. Configure K8sGPT with your preferred AI backend:

    k8sgpt auth

Setup

  1. Start the K8sGPT MCP server:

    k8sgpt serve --mcp
  2. In Claude Desktop:

    • Open Settings
    • Navigate to the Integrations section
    • Add K8sGPT as a new integration
    • The MCP server will be automatically detected
  3. Configure Claude Desktop with the following JSON:

{
  "mcpServers": {
    "k8sgpt": {
      "command": "k8sgpt",
      "args": [
        "serve",
        "--mcp"
      ]
    }
  }
}
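
If the integration is not auto-detected, the JSON above can be placed in Claude Desktop's configuration file directly. The paths below are the usual locations; treat them as assumptions and confirm against the Claude Desktop documentation for your platform:

```shell
# Typical Claude Desktop config locations (assumed; verify for your install):
#   macOS:   ~/Library/Application Support/Claude/claude_desktop_config.json
#   Windows: %APPDATA%\Claude\claude_desktop_config.json
cat "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
```

Restart Claude Desktop after editing the file so the MCP server configuration is picked up.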

Usage

Once connected, you can use Claude Desktop to:

  • Analyze your Kubernetes cluster
  • Get detailed insights about cluster health
  • Receive recommendations for fixing issues
  • Query cluster information

Example commands in Claude Desktop:

  • “Analyze my Kubernetes cluster”
  • “What’s the health status of my cluster?”
  • “Show me any issues in the default namespace”

Troubleshooting

If you encounter connection issues:

  1. Ensure K8sGPT is running with the MCP server enabled
  2. Verify your Kubernetes cluster is accessible
  3. Check that your AI backend is properly configured
  4. Restart both K8sGPT and Claude Desktop
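
The checks above can be sketched as commands (a sketch; assumes kubectl and k8sgpt are on your PATH):

```shell
# 1. Is the MCP server process running?
pgrep -f "k8sgpt serve --mcp" || echo "k8sgpt MCP server is not running"

# 2. Is the Kubernetes cluster reachable?
kubectl cluster-info

# 3. Is an AI backend configured?
k8sgpt auth list
```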

For more information, visit our documentation.

Analyzers

K8sGPT uses analyzers to triage and diagnose issues in your cluster. It ships with a set of built-in analyzers, and you can also write your own.

Built in analyzers

Enabled by default

  • podAnalyzer
  • pvcAnalyzer
  • rsAnalyzer
  • serviceAnalyzer
  • eventAnalyzer
  • ingressAnalyzer
  • statefulSetAnalyzer
  • deploymentAnalyzer
  • jobAnalyzer
  • cronJobAnalyzer
  • nodeAnalyzer
  • mutatingWebhookAnalyzer
  • validatingWebhookAnalyzer
  • configMapAnalyzer

Optional

  • hpaAnalyzer
  • pdbAnalyzer
  • networkPolicyAnalyzer
  • gatewayClass
  • gateway
  • httproute
  • logAnalyzer
  • storageAnalyzer
  • securityAnalyzer

Examples

Run a scan with the default analyzers

k8sgpt generate
k8sgpt auth add
k8sgpt analyze --explain
k8sgpt analyze --explain --with-doc

Filter on resource

k8sgpt analyze --explain --filter=Service

Filter by namespace

k8sgpt analyze --explain --filter=Pod --namespace=default

Output to JSON

k8sgpt analyze --explain --filter=Service --output=json
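
JSON output is convenient for post-processing in scripts. A minimal sketch, assuming the output shape below (field names such as kind and name are based on recent releases and should be treated as an assumption):

```shell
# Write a sample result in the assumed k8sgpt --output=json shape
cat <<'EOF' > /tmp/k8sgpt-sample.json
{"status":"ProblemDetected","problems":1,"results":[{"kind":"Service","name":"default/my-svc"}]}
EOF

# Extract the kind of each reported resource
grep -o '"kind":"[^"]*"' /tmp/k8sgpt-sample.json
```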

Anonymize during explain

k8sgpt analyze --explain --filter=Service --output=json --anonymize

LLM AI Backends

K8sGPT uses the chosen LLM (generative AI) provider when you want to explain the analysis results using the --explain flag, e.g. k8sgpt analyze --explain. You can use the --backend flag to specify a configured provider (it's openai by default).

You can list available providers using k8sgpt auth list:

Default:
> openai
Active:
Unused:
> openai
> localai
> ollama
> azureopenai
> cohere
> amazonbedrock
> amazonsagemaker
> google
> huggingface
> noopai
> googlevertexai
> watsonxai
> customrest
> ibmwatsonxai

For detailed documentation on how to configure and use each provider see here.

To set a new default provider

k8sgpt auth default -p azureopenai
Default provider set to azureopenai

Using Amazon Bedrock with inference profiles

System Inference Profile

k8sgpt auth add --backend amazonbedrock --providerRegion us-east-1 --model arn:aws:bedrock:us-east-1:123456789012:inference-profile/my-inference-profile

Application Inference Profile

k8sgpt auth add --backend amazonbedrock --providerRegion us-east-1 --model arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/2uzp4s0w39t6

Documentation

Find our official documentation available here

