InfluxDB MCP Server - Usage Examples
This document provides practical examples of using the InfluxDB MCP Server with Claude Desktop.
Table of Contents
- Basic Queries
- Writing Data
- Bucket Management
- Schema Discovery
- Advanced Queries
- Real-World Scenarios
- Tips for Working with Claude
- Common Patterns
- Troubleshooting Examples
- Best Practices
Basic Queries
Check Server Health
Prompt to Claude:
Can you check if my InfluxDB server is healthy?
What Claude does:
- Reads the `influx://health` resource
- Returns server status, version, and health check results
List All Buckets
Prompt to Claude:
Show me all the buckets in my InfluxDB instance
What Claude does:
- Reads the `influx://buckets` resource
- Returns a list of all buckets with their retention policies and metadata
Query Recent Data
Prompt to Claude:
Get the last hour of CPU usage data from the "system-metrics" bucket
What Claude does:
- Uses the `query_flux` tool with a query like:

```flux
from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> filter(fn: (r) => r._field == "usage")
```
Writing Data
Write Simple Metrics
Prompt to Claude:
Write a temperature reading of 22.5°C from the office sensor to the "sensors" bucket
What Claude does:
- Uses the `write_data` tool with line protocol:

```
temperature,location=office value=22.5
```
Write Multiple Data Points
Prompt to Claude:
Write the following sensor readings to the "iot-data" bucket:
- Office temperature: 22.5°C
- Warehouse temperature: 18.3°C
- Garden humidity: 65.2%
What Claude does:
- Uses the `write_data` tool with multiple lines:

```
temperature,location=office value=22.5
temperature,location=warehouse value=18.3
humidity,location=garden value=65.2
```
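Line protocol like this can also be generated programmatically. A minimal Python sketch (the `to_line` helper and its escaping rules are illustrative, not part of the MCP server, which accepts ready-made line protocol):

```python
def escape_tag(value: str) -> str:
    """Escape the characters that are special in line-protocol tag values."""
    return value.replace(",", r"\,").replace("=", r"\=").replace(" ", r"\ ")

def to_line(measurement: str, tags: dict, fields: dict) -> str:
    """Build one line of line protocol: measurement,tag=val field=val."""
    tag_part = ",".join(f"{k}={escape_tag(v)}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part}"

lines = [
    to_line("temperature", {"location": "office"}, {"value": 22.5}),
    to_line("temperature", {"location": "warehouse"}, {"value": 18.3}),
    to_line("humidity", {"location": "garden"}, {"value": 65.2}),
]
print("\n".join(lines))
```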
Write Data with Timestamps
Prompt to Claude:
Write CPU usage data with specific timestamps to the "system-metrics" bucket:
- Server1: 45.2% at timestamp 1672531200000000000
- Server2: 38.7% at timestamp 1672531200000000000
What Claude does:
- Uses the `write_data` tool with timestamps:

```
cpu,host=server1 usage=45.2 1672531200000000000
cpu,host=server2 usage=38.7 1672531200000000000
```
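Those 19-digit timestamps are nanoseconds since the Unix epoch, InfluxDB's default write precision. A quick Python sanity check (assuming nanosecond precision; `to_ns` is an illustrative helper):

```python
from datetime import datetime, timezone

def to_ns(dt: datetime) -> int:
    """Convert a timezone-aware datetime to a nanosecond Unix timestamp."""
    return int(dt.timestamp()) * 1_000_000_000

# 2023-01-01T00:00:00Z is the instant used in the example above.
ts = to_ns(datetime(2023, 1, 1, tzinfo=timezone.utc))
print(f"cpu,host=server1 usage=45.2 {ts}")
```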
Bucket Management
Create a New Bucket
Prompt to Claude:
Create a new bucket called "test-metrics" with a 30-day retention policy
What Claude does:
- Reads `influx://orgs` to get the organization ID
- Uses the `create_bucket` tool with:
  - name: "test-metrics"
  - org_id: (from step 1)
  - retention_seconds: 2592000 (30 days)
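The 2592000 figure is plain arithmetic: days × 24 × 60 × 60. A one-line Python check (`days_to_seconds` is just a convenience helper, not a server tool):

```python
def days_to_seconds(days: int) -> int:
    """create_bucket expects the retention period in seconds."""
    return days * 24 * 60 * 60

print(days_to_seconds(30))  # 30 days -> 2592000 seconds
```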
Create Bucket with Infinite Retention
Prompt to Claude:
Create a bucket called "permanent-logs" with no retention limit
What Claude does:
- Gets the organization ID
- Creates the bucket with `retention_seconds: 0` (or no retention rule)
Delete a Bucket
Prompt to Claude:
Delete the bucket with ID "abc123def456"
What Claude does:
- Uses the `delete_bucket` tool
- Note: Claude will typically warn you about data loss before proceeding
Schema Discovery
List Measurements
Prompt to Claude:
What measurements are in the "application-logs" bucket?
What Claude does:
- Uses the `list_measurements` tool
- Returns all measurement names found in the bucket
Get Complete Schema
Prompt to Claude:
Show me the complete schema for the "iot-data" bucket including all measurements, tags, and fields
What Claude does:
- Uses the `get_bucket_schema` tool
- Returns structured information about:
  - All measurements
  - Tag keys for each measurement
  - Field keys for each measurement
Schema for Specific Time Range
Prompt to Claude:
What was the schema of the "metrics" bucket during the last 24 hours?
What Claude does:
- Uses the `get_bucket_schema` tool with:
  - bucket: "metrics"
  - start: "-24h"
Advanced Queries
Aggregation Query
Prompt to Claude:
Calculate the average temperature for each location in the "sensors" bucket over the last 24 hours, grouped by 1-hour windows
What Claude does:
- Uses the `query_flux` tool with:

```flux
from(bucket: "sensors")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> aggregateWindow(every: 1h, fn: mean)
  |> group(columns: ["location"])
```
Multi-Measurement Query
Prompt to Claude:
Get both CPU and memory usage for server1 from the last hour
What Claude does:
- Uses the `query_flux` tool with:

```flux
from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) =>
      r.host == "server1" and
      (r._measurement == "cpu" or r._measurement == "memory")
  )
```
Join Query
Prompt to Claude:
Correlate CPU usage with memory usage for all servers in the last hour
What Claude does:
- Uses the `query_flux` tool with a join operation:

```flux
cpu = from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")

memory = from(bucket: "system-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "memory")

join(tables: {cpu: cpu, memory: memory}, on: ["_time", "host"])
```
Percentile Calculation
Prompt to Claude:
What's the 95th percentile of response times in the "api-metrics" bucket for the last 7 days?
What Claude does:
- Uses the `query_flux` tool with:

```flux
from(bucket: "api-metrics")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "response_time")
  |> quantile(q: 0.95)
```
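To sanity-check a quantile result client-side, a nearest-rank percentile is easy to compute. Note that Flux's `quantile()` defaults to a t-digest estimate, so the two can differ slightly on large datasets; the latency values below are made up for illustration:

```python
import math

def percentile(values: list, q: float) -> float:
    """Nearest-rank percentile: the smallest value with at least
    a q fraction of the data at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q * len(ordered)))
    return ordered[rank - 1]

latencies = [120, 130, 95, 210, 180, 160, 145, 300, 110, 175]
print(percentile(latencies, 0.95))
```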
Real-World Scenarios
IoT Temperature Monitoring
Scenario: You have temperature sensors in multiple locations and want to monitor them.
1. Setup:
Create a bucket called "iot-sensors" with a 90-day retention policy
2. Write Data:
Write the following temperature readings to "iot-sensors":
- Living room: 21.5°C
- Bedroom: 19.8°C
- Kitchen: 23.2°C
- Garage: 15.3°C
3. Query Current Status:
What are the latest temperature readings from all sensors?
4. Analyze Trends:
Show me the average temperature for each room over the last 24 hours
5. Detect Anomalies:
Find any times in the last week when any room temperature exceeded 25°C
Application Performance Monitoring
Scenario: Monitor API response times and error rates.
1. Schema Discovery:
What metrics are available in my "api-metrics" bucket?
2. Real-time Monitoring:
Show me API response times for the /users endpoint in the last 15 minutes
3. Error Analysis:
How many 5xx errors occurred in the last hour, grouped by endpoint?
4. Performance Comparison:
Compare the average response time of the /users endpoint between today and yesterday
System Resource Monitoring
Scenario: Track server CPU, memory, and disk usage.
1. Write Batch Metrics:
Write the following system metrics to "system-metrics":
- server1: CPU 45.2%, Memory 8GB, Disk 78%
- server2: CPU 38.7%, Memory 6.5GB, Disk 65%
- server3: CPU 52.1%, Memory 9.2GB, Disk 82%
2. Resource Analysis:
Which server had the highest average CPU usage in the last 24 hours?
3. Capacity Planning:
Show me the memory usage trend for all servers over the last 7 days
4. Alert Detection:
Find any instances where disk usage exceeded 80% in the last week
Financial Data Analysis
Scenario: Store and analyze stock prices or trading data.
1. Write Stock Prices:
Write the following stock prices to "market-data":
- AAPL: 178.50
- GOOGL: 142.30
- MSFT: 385.20
2. Price History:
Show me the price history for AAPL over the last 30 days
3. Daily Statistics:
Calculate the daily high, low, and average for each stock in the last week
4. Volatility Analysis:
Calculate the standard deviation of price changes for each stock over the last month
Environmental Monitoring
Scenario: Track environmental data like air quality, humidity, and pressure.
1. Multi-Sensor Write:
Write environmental data to "environment" bucket:
- Air quality index: 45 (location: downtown)
- Humidity: 68% (location: downtown)
- Pressure: 1013.25 hPa (location: downtown)
- Air quality index: 32 (location: suburbs)
- Humidity: 71% (location: suburbs)
- Pressure: 1013.50 hPa (location: suburbs)
2. Location Comparison:
Compare air quality between downtown and suburbs over the last week
3. Weather Correlation:
Show the relationship between humidity and pressure for each location
4. Data Export:
Get all environmental readings from the last month in a format I can export to CSV
Tips for Working with Claude
Be Specific About Time Ranges
Instead of: "Show me some data"
Say: "Show me data from the last hour" or "Show me data from 2024-01-01 to 2024-01-31"
Specify Measurements and Fields
Instead of: "Get metrics"
Say: "Get the CPU usage metric from the system-metrics bucket"
Use Natural Language
Claude understands context:
- "What's in that bucket?" (if you just discussed a bucket)
- "Show me the same thing but for yesterday"
- "Now filter that to just server1"
Ask for Explanations
- "Explain what this Flux query does"
- "Why did that query return empty results?"
- "What's the best way to query this type of data?"
Iterate on Queries
Start simple and refine:
- "Show me CPU data"
- "Now average it by hour"
- "Now compare it to yesterday"
- "Now show only when it exceeded 80%"
Request Data Visualization Suggestions
- "How should I visualize this data?"
- "What kind of chart would work best for this?"
- "Can you format this data for plotting?"
Common Patterns
Time Ranges
- Last hour: `-1h`
- Last 24 hours: `-24h` or `-1d`
- Last week: `-7d` or `-1w`
- Last month: `-30d` or `-1mo`
- Specific date: `2024-01-01T00:00:00Z`
Filters
- Single measurement: `r._measurement == "cpu"`
- Multiple measurements: `(r._measurement == "cpu" or r._measurement == "memory")`
- Tag filter: `r.host == "server1"`
- Field filter: `r._field == "usage"`
Aggregations
- Mean: `aggregateWindow(every: 1h, fn: mean)`
- Sum: `aggregateWindow(every: 1h, fn: sum)`
- Max: `aggregateWindow(every: 1h, fn: max)`
- Min: `aggregateWindow(every: 1h, fn: min)`
- Count: `aggregateWindow(every: 1h, fn: count)`
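What `aggregateWindow` does can be mimicked client-side for small result sets: bucket points into fixed time windows, then reduce each window. A rough Python sketch (`aggregate_window` is an illustrative helper, not a server tool, and ignores Flux details like empty windows and window offsets):

```python
from collections import defaultdict

def aggregate_window(points, every_s, fn):
    """Group (timestamp_seconds, value) pairs into fixed windows of
    every_s seconds, then reduce each window with fn."""
    windows = defaultdict(list)
    for ts, value in points:
        windows[ts - ts % every_s].append(value)
    return {start: fn(vals) for start, vals in sorted(windows.items())}

# Two points in each of two 1-hour windows:
points = [(0, 10.0), (1800, 20.0), (3600, 30.0), (5400, 50.0)]
mean = lambda vals: sum(vals) / len(vals)
print(aggregate_window(points, 3600, mean))
```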
Grouping
- By tag: `group(columns: ["host"])`
- By measurement: `group(columns: ["_measurement"])`
- By multiple columns: `group(columns: ["host", "region"])`
Troubleshooting Examples
Empty Results
Problem: "My query returned no data"
Prompt to Claude:
I'm querying the "metrics" bucket for CPU data but getting no results. Can you help me debug?
Claude will:
- Check if the bucket exists
- List measurements in the bucket
- Verify the time range has data
- Suggest alternative query approaches
Wrong Data Format
Problem: "My write failed with a format error"
Prompt to Claude:
I'm trying to write "cpu=usage=45.2" but getting an error. What's wrong?
Claude will:
- Explain line protocol format
- Show the correct format: `cpu usage=45.2` (a space, not an equals sign, after the measurement name)
- Provide more examples
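A rough shape check can catch mistakes like this before sending a write. The regex below is a debugging aid only, not InfluxDB's actual parser (which also handles escaping, quoted strings, and type suffixes):

```python
import re

# Loose line-protocol shape: measurement[,tag=value...] then a single
# space, then field=value pairs, then an optional integer timestamp.
LINE_RE = re.compile(
    r"^[A-Za-z_]\w*"                # measurement name
    r"(,\w+=[^,\s]+)*"              # optional ,tag=value pairs
    r" "                            # single space before the fields
    r"\w+=[^,\s]+(,\w+=[^,\s]+)*"   # field=value pairs
    r"( \d+)?$"                     # optional timestamp
)

def looks_like_line_protocol(line: str) -> bool:
    return bool(LINE_RE.match(line))

print(looks_like_line_protocol("cpu usage=45.2"))   # well-formed
print(looks_like_line_protocol("cpu=usage=45.2"))   # malformed
```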
Performance Issues
Problem: "My query is too slow"
Prompt to Claude:
This query is taking too long. Can you optimize it?
[paste your query]
Claude will:
- Analyze the query structure
- Suggest adding filters earlier in the pipeline
- Recommend using narrower time ranges
- Suggest appropriate aggregation windows
Best Practices
- Use descriptive tag names: `location=office`, not `loc=1`
- Keep line protocol consistent: Always use the same tags for a measurement
- Use appropriate timestamps: Match your data's actual precision
- Filter early in Flux queries: Put filters right after `range()`
- Use appropriate time ranges: Don't query years of data when you need hours
- Test with small queries first: Verify logic before scaling up
- Use tags for dimensions: Put categorical data in tags, not fields
- Use fields for measurements: Put numeric data in fields
- Don't create too many unique series: Each unique tag combination creates a series
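The last point can be made concrete: a series is identified by the measurement plus the complete tag set, so counting distinct combinations predicts how cardinality grows. An illustrative Python sketch (`series_key` is not a server tool):

```python
def series_key(measurement: str, tags: dict) -> tuple:
    """A series is identified by its measurement plus its full tag set."""
    return (measurement, tuple(sorted(tags.items())))

points = [
    ("cpu", {"host": "server1"}),
    ("cpu", {"host": "server2"}),    # new tag value -> a second series
    ("cpu", {"host": "server1"}),    # same tags -> same series as the first
    ("memory", {"host": "server1"}), # new measurement -> a third series
]
cardinality = len({series_key(m, t) for m, t in points})
print(cardinality)  # 3 distinct series from 4 points
```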