How I Set Up MQTT with Grafana for Real-Time IoT Monitoring
March 2026 · 8 min read

At work, we needed a way to monitor thousands of industrial sensors in real time. Here's how I connected MQTT (Mosquitto) to Grafana through a lightweight Python pipeline — and what I learned along the way.
The Problem
Our IoT devices publish telemetry (temperature, vibration, humidity) over MQTT. The data was landing in our backend, but engineers had no quick way to visualise trends or spot anomalies without digging through logs. We needed a real-time dashboard that anyone on the team could use.
Architecture Overview
IoT Sensors
│
│ MQTT (mosquitto:1883)
▼
┌──────────────────┐
│ Python Subscriber│──▶ PostgreSQL (time-series data)
│ (paho-mqtt) │
└──────────────────┘
│
▼
┌─────────────┐
│ Grafana │ ◀── SQL queries
│ Dashboard │
└─────────────┘
Step 1: Set Up the Mosquitto Broker
I used a Docker Compose file to run Mosquitto alongside our other services. The key configuration points:
# docker-compose.yml (excerpt)
services:
  mqtt:
    image: eclipse-mosquitto:2
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - mqtt-data:/mosquitto/data

# mosquitto.conf
listener 1883
allow_anonymous false
password_file /mosquitto/config/passwd
Authentication matters even in internal networks. I generated the password file with mosquitto_passwd and mounted it as a volume. No plaintext credentials in the config.
Step 2: Python MQTT Subscriber
A lightweight Python service subscribes to the telemetry topic and inserts rows into PostgreSQL:
import paho.mqtt.client as mqtt
import psycopg2
import json, os

def on_connect(client, userdata, flags, rc):
    # Subscribing here (rather than once at startup) means the
    # subscription is re-established automatically after a reconnect.
    client.subscribe("devices/+/telemetry")

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    conn = userdata["db"]
    with conn.cursor() as cur:
        cur.execute(
            """INSERT INTO telemetry
               (device_id, metric, value, ts)
               VALUES (%s, %s, %s, NOW())""",
            (data["device_id"], data["type"], data["value"])
        )
    conn.commit()

client = mqtt.Client(userdata={
    "db": psycopg2.connect(os.environ["DATABASE_URL"])
})
client.username_pw_set(
    os.environ["MQTT_USER"],
    os.environ["MQTT_PASS"]
)
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt", 1883)
client.loop_forever()
The + wildcard in the topic lets us capture all device IDs without listing them individually. Credentials come from environment variables — never hardcoded.
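To sanity-check the pipeline end to end, a throwaway publisher can push a fake reading onto the same topic pattern and watch it land in PostgreSQL. This is a minimal sketch, not part of the original setup: the device ID `sensor-001` and the reading values are invented for illustration, and the payload fields mirror what the subscriber above expects.

```python
import json
import os

def make_payload(device_id: str, metric: str, value: float) -> bytes:
    """Build a telemetry payload with the fields the subscriber reads."""
    return json.dumps(
        {"device_id": device_id, "type": metric, "value": value}
    ).encode()

def publish_test_reading(host: str = "mqtt", port: int = 1883) -> None:
    """Connect, publish one fake temperature reading, and disconnect."""
    # Third-party import kept local so make_payload stays importable on its own.
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.username_pw_set(os.environ["MQTT_USER"], os.environ["MQTT_PASS"])
    client.connect(host, port)
    client.publish(
        "devices/sensor-001/telemetry",
        make_payload("sensor-001", "temperature", 23.7),
    )
    client.disconnect()

if __name__ == "__main__":
    publish_test_reading()
```

Running it once and then checking `SELECT * FROM telemetry ORDER BY ts DESC LIMIT 1` confirms the whole chain works before any Grafana panel exists.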
Step 3: PostgreSQL Schema
CREATE TABLE telemetry (
    id        BIGSERIAL PRIMARY KEY,
    device_id VARCHAR(64) NOT NULL,
    metric    VARCHAR(32) NOT NULL,
    value     DOUBLE PRECISION NOT NULL,
    ts        TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes for Grafana time-range queries
CREATE INDEX idx_telemetry_ts ON telemetry (ts DESC);
CREATE INDEX idx_telemetry_device ON telemetry (device_id, ts DESC);
The two indexes are critical. Without the composite index on (device_id, ts), Grafana queries filtering by device would do full table scans once you hit a few million rows.
Step 4: Grafana Dashboard
I added PostgreSQL as a Grafana data source and built panels with SQL queries:
-- Temperature over time (last 6 hours)
SELECT
    ts AS time,
    device_id,
    value
FROM telemetry
WHERE
    metric = 'temperature'
    AND ts > NOW() - INTERVAL '6 hours'
ORDER BY ts;
Grafana's time-series panel handles the rest — one line per device, auto-refreshing every 10 seconds. I also added a stat panel for current values and an alert rule that fires when temperature exceeds 60°C.
What I'd Do Differently
- Connection pooling matters. The initial version opened a new DB connection per message. At 500 msg/sec, that killed PostgreSQL. Switching to a connection pool (or a single persistent connection with reconnect logic) solved it.
- Use QoS 1 for telemetry. QoS 0 (“fire and forget”) loses messages under load. QoS 1 guarantees at-least-once delivery, which is acceptable for sensor data where duplicates are fine but gaps are not.
- Partition your tables. After a few months, the telemetry table had hundreds of millions of rows. Time-based partitioning (built into PostgreSQL 10+) made old data cleanup trivial and kept query performance stable.
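Putting the first two lessons together, a revised subscriber might look like the sketch below. It is not production code: the pool size, reconnect behaviour, and helper name `parse_row` are my assumptions, though the topic and payload shape match the earlier setup. QoS 1 is requested at subscribe time, and each message borrows a connection from a pool instead of opening its own.

```python
import json

def parse_row(payload: bytes):
    """Turn a telemetry payload into the (device_id, metric, value) insert tuple."""
    data = json.loads(payload)
    return (data["device_id"], data["type"], data["value"])

def run_subscriber():
    # Third-party imports kept local so parse_row stays importable on its own.
    import os
    import paho.mqtt.client as mqtt
    from psycopg2.pool import ThreadedConnectionPool

    # One pool for the process; maxconn=5 is an arbitrary starting point.
    pool = ThreadedConnectionPool(
        minconn=1, maxconn=5, dsn=os.environ["DATABASE_URL"]
    )

    def on_connect(client, userdata, flags, rc):
        # QoS 1: at-least-once delivery — duplicates are fine, gaps are not.
        client.subscribe("devices/+/telemetry", qos=1)

    def on_message(client, userdata, msg):
        conn = pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO telemetry (device_id, metric, value, ts) "
                    "VALUES (%s, %s, %s, NOW())",
                    parse_row(msg.payload),
                )
            conn.commit()
        finally:
            # Always return the connection, even if the insert fails.
            pool.putconn(conn)

    client = mqtt.Client()
    client.username_pw_set(os.environ["MQTT_USER"], os.environ["MQTT_PASS"])
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("mqtt", 1883)
    client.loop_forever()

if __name__ == "__main__":
    run_subscriber()
```

Because inserts are idempotent in effect here (a duplicate row is just a second reading at the same timestamp), QoS 1's occasional redelivery costs nothing, while the pool keeps PostgreSQL connection churn flat under bursty traffic.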