InfluxDB MCP Server - Usage Examples

This document provides practical examples of using the InfluxDB MCP Server with Claude Desktop.

Table of Contents

  • Basic Queries
  • Writing Data
  • Bucket Management
  • Schema Discovery
  • Advanced Queries
  • Real-World Scenarios
  • Tips for Working with Claude
  • Common Patterns
  • Troubleshooting Examples
  • Best Practices
  • Additional Resources

Basic Queries

Check Server Health

Prompt to Claude:

Can you check if my InfluxDB server is healthy?

What Claude does:

  • Reads the influx://health resource
  • Returns server status, version, and health check results

List All Buckets

Prompt to Claude:

Show me all the buckets in my InfluxDB instance

What Claude does:

  • Reads the influx://buckets resource
  • Returns a list of all buckets with their retention policies and metadata

Query Recent Data

Prompt to Claude:

Get the last hour of CPU usage data from the "system-metrics" bucket

What Claude does:

  • Uses the query_flux tool with a query like:
from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> filter(fn: (r) => r._field == "usage")

Writing Data

Write Simple Metrics

Prompt to Claude:

Write a temperature reading of 22.5°C from the office sensor to the "sensors" bucket

What Claude does:

  • Uses the write_data tool with line protocol:
temperature,location=office value=22.5

Write Multiple Data Points

Prompt to Claude:

Write the following sensor readings to the "iot-data" bucket:
- Office temperature: 22.5°C
- Warehouse temperature: 18.3°C
- Garden humidity: 65.2%

What Claude does:

  • Uses the write_data tool with multiple lines:
temperature,location=office value=22.5
temperature,location=warehouse value=18.3
humidity,location=garden value=65.2

Write Data with Timestamps

Prompt to Claude:

Write CPU usage data with specific timestamps to the "system-metrics" bucket:
- Server1: 45.2% at timestamp 1672531200000000000
- Server2: 38.7% at timestamp 1672531200000000000

What Claude does:

  • Uses the write_data tool with timestamps:
cpu,host=server1 usage=45.2 1672531200000000000
cpu,host=server2 usage=38.7 1672531200000000000
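The timestamps above are nanoseconds since the Unix epoch, which is InfluxDB's default write precision. If you want to double-check a value, Flux's int() conversion turns a time literal into exactly this representation:

```flux
// Converting a time literal to nanoseconds since the Unix epoch
int(v: 2023-01-01T00:00:00Z)
// 1672531200000000000 (the timestamp used above)
```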

Bucket Management

Create a New Bucket

Prompt to Claude:

Create a new bucket called "test-metrics" with a 30-day retention policy

What Claude does:

  1. Reads influx://orgs to get the organization ID
  2. Uses the create_bucket tool with:
    • name: "test-metrics"
    • org_id: (from step 1)
    • retention_seconds: 2592000 (30 days)

Create Bucket with Infinite Retention

Prompt to Claude:

Create a bucket called "permanent-logs" with no retention limit

What Claude does:

  1. Gets the organization ID
  2. Creates bucket with retention_seconds: 0 or no retention rule

Delete a Bucket

Prompt to Claude:

Delete the bucket with ID "abc123def456"

What Claude does:

  • Uses the delete_bucket tool
  • Note: Claude will typically warn you about data loss before proceeding

Schema Discovery

List Measurements

Prompt to Claude:

What measurements are in the "application-logs" bucket?

What Claude does:

  • Uses the list_measurements tool
  • Returns all measurement names found in the bucket
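Under the hood this typically maps to the schema.measurements() helper from the Flux standard library (shown here as a sketch of what the tool runs):

```flux
import "influxdata/influxdb/schema"

schema.measurements(bucket: "application-logs")
```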

Get Complete Schema

Prompt to Claude:

Show me the complete schema for the "iot-data" bucket including all measurements, tags, and fields

What Claude does:

  • Uses the get_bucket_schema tool
  • Returns structured information about:
    • All measurements
    • Tag keys for each measurement
    • Field keys for each measurement

Schema for Specific Time Range

Prompt to Claude:

What was the schema of the "metrics" bucket during the last 24 hours?

What Claude does:

  • Uses the get_bucket_schema tool with:
    • bucket: "metrics"
    • start: "-24h"

Advanced Queries

Aggregation Query

Prompt to Claude:

Calculate the average temperature for each location in the "sensors" bucket over the last 24 hours, grouped by 1-hour windows

What Claude does:

  • Uses the query_flux tool with:
from(bucket: "sensors")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> aggregateWindow(every: 1h, fn: mean)
  |> group(columns: ["location"])

Multi-Measurement Query

Prompt to Claude:

Get both CPU and memory usage for server1 from the last hour

What Claude does:

  • Uses the query_flux tool with:
from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) =>
    r.host == "server1" and
    (r._measurement == "cpu" or r._measurement == "memory")
  )

Join Query

Prompt to Claude:

Correlate CPU usage with memory usage for all servers in the last hour

What Claude does:

  • Uses the query_flux tool with a join operation:
cpu = from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")

memory = from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "memory")

join(tables: {cpu: cpu, memory: memory}, on: ["_time", "host"])

Percentile Calculation

Prompt to Claude:

What's the 95th percentile of response times in the "api-metrics" bucket for the last 7 days?

What Claude does:

  • Uses the query_flux tool with:
from(bucket: "api-metrics")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "response_time")
  |> quantile(q: 0.95)

Real-World Scenarios

IoT Temperature Monitoring

Scenario: You have temperature sensors in multiple locations and want to monitor them.

1. Setup:

Create a bucket called "iot-sensors" with a 90-day retention policy

2. Write Data:

Write the following temperature readings to "iot-sensors":
- Living room: 21.5°C
- Bedroom: 19.8°C
- Kitchen: 23.2°C
- Garage: 15.3°C

3. Query Current Status:

What are the latest temperature readings from all sensors?
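A Flux sketch of the query Claude might generate (assuming the measurement and field names from the write step above):

```flux
from(bucket: "iot-sensors")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> last()  // latest point per series, i.e. per location tag
```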

4. Analyze Trends:

Show me the average temperature for each room over the last 24 hours

5. Detect Anomalies:

Find any times in the last week when any room temperature exceeded 25°C
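A sketch of the threshold query this could translate to:

```flux
from(bucket: "iot-sensors")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> filter(fn: (r) => r._value > 25.0)  // keep only out-of-range readings
```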

Application Performance Monitoring

Scenario: Monitor API response times and error rates.

1. Schema Discovery:

What metrics are available in my "api-metrics" bucket?

2. Real-time Monitoring:

Show me API response times for the /users endpoint in the last 15 minutes

3. Error Analysis:

How many 5xx errors occurred in the last hour, grouped by endpoint?
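One possible Flux translation; the http_requests measurement and the status and endpoint tags are assumptions about your schema, not something the server defines:

```flux
from(bucket: "api-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "http_requests")
  |> filter(fn: (r) => r.status =~ /^5/)  // 5xx status codes stored as a tag
  |> group(columns: ["endpoint"])
  |> count()
```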

4. Performance Comparison:

Compare the average response time of the /users endpoint between today and yesterday

System Resource Monitoring

Scenario: Track server CPU, memory, and disk usage.

1. Write Batch Metrics:

Write the following system metrics to "system-metrics":
- server1: CPU 45.2%, Memory 8GB, Disk 78%
- server2: CPU 38.7%, Memory 6.5GB, Disk 65%
- server3: CPU 52.1%, Memory 9.2GB, Disk 82%

2. Resource Analysis:

Which server had the highest average CPU usage in the last 24 hours?

3. Capacity Planning:

Show me the memory usage trend for all servers over the last 7 days

4. Alert Detection:

Find any instances where disk usage exceeded 80% in the last week

Financial Data Analysis

Scenario: Store and analyze stock prices or trading data.

1. Write Stock Prices:

Write the following stock prices to "market-data":
- AAPL: 178.50
- GOOGL: 142.30
- MSFT: 385.20

2. Price History:

Show me the price history for AAPL over the last 30 days

3. Daily Statistics:

Calculate the daily high, low, and average for each stock in the last week
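A sketch using multiple yield() results from one base query (the stock_price measurement name is an assumption):

```flux
data = from(bucket: "market-data")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "stock_price")

data |> aggregateWindow(every: 1d, fn: max) |> yield(name: "high")
data |> aggregateWindow(every: 1d, fn: min) |> yield(name: "low")
data |> aggregateWindow(every: 1d, fn: mean) |> yield(name: "average")
```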

4. Volatility Analysis:

Calculate the standard deviation of price changes for each stock over the last month
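One way to express this in Flux, assuming each stock is its own series (e.g. keyed by a symbol tag): difference() computes point-to-point price changes within each series, and stddev() reduces each series to a single value:

```flux
from(bucket: "market-data")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "stock_price")
  |> difference()  // per-point price change within each series
  |> stddev()      // one volatility value per stock
```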

Environmental Monitoring

Scenario: Track environmental data like air quality, humidity, and pressure.

1. Multi-Sensor Write:

Write environmental data to "environment" bucket:
- Air quality index: 45 (location: downtown)
- Humidity: 68% (location: downtown)
- Pressure: 1013.25 hPa (location: downtown)
- Air quality index: 32 (location: suburbs)
- Humidity: 71% (location: suburbs)
- Pressure: 1013.50 hPa (location: suburbs)

2. Location Comparison:

Compare air quality between downtown and suburbs over the last week

3. Weather Correlation:

Show the relationship between humidity and pressure for each location

4. Data Export:

Get all environmental readings from the last month in a format I can export to CSV

Tips for Working with Claude

Be Specific About Time Ranges

Instead of: "Show me some data"
Say: "Show me data from the last hour" or "Show me data from 2024-01-01 to 2024-01-31"

Specify Measurements and Fields

Instead of: "Get metrics"
Say: "Get the CPU usage metric from the system-metrics bucket"

Use Natural Language

Claude understands context:

  • "What's in that bucket?" (if you just discussed a bucket)
  • "Show me the same thing but for yesterday"
  • "Now filter that to just server1"

Ask for Explanations

  • "Explain what this Flux query does"
  • "Why did that query return empty results?"
  • "What's the best way to query this type of data?"

Iterate on Queries

Start simple and refine:

  1. "Show me CPU data"
  2. "Now average it by hour"
  3. "Now compare it to yesterday"
  4. "Now show only when it exceeded 80%"

Request Data Visualization Suggestions

  • "How should I visualize this data?"
  • "What kind of chart would work best for this?"
  • "Can you format this data for plotting?"

Common Patterns

Time Ranges

  • Last hour: -1h
  • Last 24 hours: -24h or -1d
  • Last week: -7d or -1w
  • Last month: -30d or -1mo
  • Specific date: 2024-01-01T00:00:00Z
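Relative durations go directly into range(); absolute dates are written as RFC3339 time literals:

```flux
from(bucket: "system-metrics")
  |> range(start: 2024-01-01T00:00:00Z, stop: 2024-02-01T00:00:00Z)
```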

Filters

  • Single measurement: r._measurement == "cpu"
  • Multiple measurements: (r._measurement == "cpu" or r._measurement == "memory")
  • Tag filter: r.host == "server1"
  • Field filter: r._field == "usage"

Aggregations

  • Mean: aggregateWindow(every: 1h, fn: mean)
  • Sum: aggregateWindow(every: 1h, fn: sum)
  • Max: aggregateWindow(every: 1h, fn: max)
  • Min: aggregateWindow(every: 1h, fn: min)
  • Count: aggregateWindow(every: 1h, fn: count)

Grouping

  • By tag: group(columns: ["host"])
  • By measurement: group(columns: ["_measurement"])
  • By multiple columns: group(columns: ["host", "region"])

Troubleshooting Examples

Empty Results

Problem: "My query returned no data"

Prompt to Claude:

I'm querying the "metrics" bucket for CPU data but getting no results. Can you help me debug?

Claude will:

  1. Check if the bucket exists
  2. List measurements in the bucket
  3. Verify the time range has data
  4. Suggest alternative query approaches
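For step 3, one diagnostic sketch is to widen the range and drop every filter except a row limit, just to confirm the bucket contains anything at all:

```flux
from(bucket: "metrics")
  |> range(start: -30d)  // deliberately wide window
  |> limit(n: 10)        // fetch only a sample, keep it cheap
```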

Wrong Data Format

Problem: "My write failed with a format error"

Prompt to Claude:

I'm trying to write "cpu=usage=45.2" but getting an error. What's wrong?

Claude will:

  1. Explain line protocol format
  2. Show the correct format: cpu usage=45.2 (space instead of equals after measurement)
  3. Provide more examples

Performance Issues

Problem: "My query is too slow"

Prompt to Claude:

This query is taking too long. Can you optimize it?
[paste your query]

Claude will:

  1. Analyze the query structure
  2. Suggest adding filters earlier in the pipeline
  3. Recommend using narrower time ranges
  4. Suggest appropriate aggregation windows
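As a sketch, an optimized pipeline pushes the measurement, field, and tag filters directly after range() so the storage engine can prune data before Flux processes it:

```flux
from(bucket: "system-metrics")
  |> range(start: -1h)                           // narrowest range that answers the question
  |> filter(fn: (r) => r._measurement == "cpu")  // these filters are pushed down to storage
  |> filter(fn: (r) => r._field == "usage")
  |> filter(fn: (r) => r.host == "server1")
  |> aggregateWindow(every: 1m, fn: mean)        // aggregate before heavier transforms
```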

Best Practices

  1. Use descriptive tag names: location=office not loc=1
  2. Keep line protocol consistent: Always use the same tags for a measurement
  3. Use appropriate timestamps: Match your data's actual precision
  4. Filter early in Flux queries: Put filters right after range()
  5. Use appropriate time ranges: Don't query years of data when you need hours
  6. Test with small queries first: Verify logic before scaling up
  7. Use tags for dimensions: Put categorical data in tags, not fields
  8. Use fields for measurements: Put numeric data in fields
  9. Don't create too many unique series: Each unique tag combination creates a series
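Applied to line protocol, practices 1, 7, and 8 look like this (the sensor_id tag is illustrative):

```
# Good: categorical data in tags, the numeric reading in a field
temperature,location=office,sensor_id=s1 value=22.5

# Problematic: the reading stored as a tag creates a new series per value
temperature,value=22.5 location="office"
```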

Additional Resources