Building a Jenkins CI/CD Pipeline for an IoT Platform
February 2026 · 10 min read

Shipping firmware and backend services for industrial IoT requires a different kind of pipeline. Here's how I designed a Jenkins-based CI/CD system that handles Python packages, Docker images, Robot Framework tests, and multi-environment deployments.
Why Jenkins (Not GitHub Actions)
Our IoT platform involves hardware-in-the-loop testing — some test stages need physical devices or VPN access to factory networks. Jenkins running on-premise gave us control over the execution environment that hosted CI services can't easily replicate. We also needed persistent build agents with specific tools pre-installed.
Pipeline Architecture
┌──────────────┐     ┌───────────────┐     ┌──────────────┐
│    Build     │────▶│     Test      │────▶│    Deploy    │
│              │     │               │     │              │
│ • Lint code  │     │ • Unit tests  │     │ • Staging    │
│ • Build pkg  │     │ • Robot FW    │     │ • Approval   │
│ • Docker img │     │ • Integration │     │ • Production │
└──────────────┘     └───────────────┘     └──────────────┘
The Jenkinsfile
Here's a simplified version of our declarative pipeline:
pipeline {
    agent any

    environment {
        REGISTRY = "your-registry.azurecr.io"
        IMAGE    = "iot-backend"
    }

    stages {
        stage('Lint') {
            steps {
                sh 'pip install ruff'
                sh 'ruff check src/'
            }
        }

        stage('Unit Tests') {
            steps {
                sh 'pip install -r requirements.txt'
                sh 'pytest tests/unit/ --junitxml=report.xml'
            }
            post {
                always { junit 'report.xml' }
            }
        }

        stage('Robot Framework') {
            steps {
                sh '''
                    robot --outputdir results/ \
                          --variable ENV:staging \
                          tests/robot/
                '''
            }
            post {
                always {
                    robot outputPath: 'results/',
                          passThreshold: 95.0
                }
            }
        }

        stage('Build & Push Image') {
            steps {
                script {
                    def tag = "${env.BUILD_NUMBER}"
                    sh """
                        docker build -t ${REGISTRY}/${IMAGE}:${tag} .
                        docker push ${REGISTRY}/${IMAGE}:${tag}
                    """
                }
            }
        }

        stage('Deploy to Staging') {
            steps {
                sh '''
                    ansible-playbook deploy.yml \
                        -i inventory/staging.yml \
                        -e "image_tag=${BUILD_NUMBER}"
                '''
            }
        }

        stage('Approval') {
            steps {
                input message: 'Deploy to production?',
                      ok: 'Deploy'
            }
        }

        stage('Deploy to Production') {
            steps {
                sh '''
                    ansible-playbook deploy.yml \
                        -i inventory/production.yml \
                        -e "image_tag=${BUILD_NUMBER}"
                '''
            }
        }
    }

    post {
        failure {
            // The triple-single-quoted block means the shell, not Groovy,
            // expands ${BUILD_NUMBER} — so it must sit outside the JSON's
            // single quotes or it would be passed through literally.
            sh '''
                curl -X POST "$SLACK_WEBHOOK" \
                    -H "Content-Type: application/json" \
                    -d '{"text": "Build #'"${BUILD_NUMBER}"' failed"}'
            '''
        }
    }
}
Robot Framework Integration
The Robot Framework stage runs end-to-end tests that exercise the full platform — API calls, MQTT message flows, and database state verification. We wrote custom Python libraries to abstract complex test operations:
# libraries/IoTPlatformLibrary.py
class IoTPlatformLibrary:

    def publish_telemetry(self, device_id, metric, value):
        """Publish a message to the MQTT broker and
        verify it arrives in the database."""
        ...

    def device_should_be_online(self, device_id):
        """Check the device status API."""
        ...
# tests/robot/smoke_test.robot
*** Settings ***
Library    IoTPlatformLibrary

*** Test Cases ***
Telemetry End-to-End
    Publish Telemetry    sensor-01    temperature    23.5
    Sleep    2s
    ${value}=    Get Latest Value    sensor-01    temperature
    Should Be Equal As Numbers    ${value}    23.5
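The fixed Sleep 2s in that test is a pragmatic shortcut, but it makes the suite both slower than necessary and flaky when the platform is under load. A more robust pattern is a polling helper that re-checks a condition until a deadline. This is a generic sketch, not our actual library code:

```python
import time


def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value, or raise
    TimeoutError once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)
```

A keyword like Get Latest Value can then call something like wait_until(lambda: query_db(device_id, metric)) internally (query_db being a stand-in for the real database lookup), so each test waits exactly as long as it needs to and no longer.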
Things I Learned the Hard Way
- Pin your dependencies. We hit a production issue because pip install pulled a newer minor version of a library that changed behaviour. Now everything is pinned in requirements.txt with exact versions.
- Fail fast. Lint and unit tests run first. If code style or basic logic is broken, the pipeline stops immediately — don't waste 20 minutes on integration tests for code that won't pass linting.
- Manual approval gates work. For IoT, automated production deploys are risky. A simple input step lets a team lead review staging metrics before promoting to production. Low-tech, high-value.
- Ansible over shell scripts for deployment. Our early pipelines used raw SSH commands. Ansible playbooks are idempotent, testable, and self-documenting — much safer for production.
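The dependency-pinning lesson is easier to keep honest if the pipeline enforces it. A small standard-library check can run in an early stage and fail the build when installed packages drift from the pins; the requirements.txt path and the exact == pin format here are assumptions about your setup:

```python
from importlib import metadata


def find_pin_mismatches(requirements_path="requirements.txt"):
    """Return (package, pinned, installed) tuples for every pinned
    requirement whose installed version differs from the pin."""
    mismatches = []
    with open(requirements_path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and anything not pinned with ==
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, pinned = (part.strip() for part in line.split("==", 1))
            try:
                installed = metadata.version(name)
            except metadata.PackageNotFoundError:
                installed = "not installed"
            if installed != pinned:
                mismatches.append((name, pinned, installed))
    return mismatches
```

Wired into the Lint stage, a non-empty result can fail the build before any test time is spent.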