In this blog post, we will dive into the fascinating world of Grafana and Prometheus integration with Traefik in minikube.
We will explore how to install and configure Prometheus and Grafana to visualize the network requests made to our services that are running behind the Traefik ingress controller.
In order for Prometheus to scrape and collect Traefik metrics, we first need to expose them through a Kubernetes service. We can achieve this by creating a service manifest. Let's take a closer look at the steps involved:
To begin, we will create a service called traefik-metrics in the traefik namespace. This service will expose the Traefik metrics on port 9100. By exposing this service, we gain the ability to access Traefik metrics via Kubernetes DNS using the following address: traefik-metrics.traefik.svc.cluster.local:9100.
Now, let's examine the service manifest in more detail. By deploying this manifest, we ensure that the traefik-metrics service is up and running, ready to serve our needs.
apiVersion: v1
kind: Service
metadata:
  name: traefik-metrics
  namespace: traefik
spec:
  ports:
    - name: metrics
      protocol: TCP
      port: 9100
      targetPort: metrics
  selector:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/name: traefik
  type: ClusterIP
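If you save this manifest to a file, you can apply it and quickly check that the endpoint responds. A minimal sketch, assuming the filename traefik-metrics-service.yaml and that Prometheus metrics are enabled in your Traefik installation:

# Apply the Service manifest (the filename is an assumption)
kubectl apply -f traefik-metrics-service.yaml

# Confirm the selector matched the Traefik pod
kubectl get endpoints traefik-metrics -n traefik

# Port-forward the service and make sure Traefik serves Prometheus metrics
kubectl port-forward -n traefik svc/traefik-metrics 9100:9100 &
sleep 2
curl -s http://localhost:9100/metrics | head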
In this section, we will explore Prometheus, a powerful and free software application used for event monitoring and alerting. Prometheus collects and stores metrics as time series data, providing a robust query language that enables visualization of the metrics in other applications such as Grafana.
Before we begin configuring Prometheus, let's create a Kubernetes namespace called prometheus. This namespace will serve as the environment where we deploy all components related to Prometheus.
To create the prometheus namespace, simply run the following command:
kubectl create ns prometheus
To ensure that Prometheus can scrape and collect the Traefik metrics, we need to create a ConfigMap object with the appropriate configuration.
The following manifest configures Prometheus to scrape traefik-metrics.traefik.svc.cluster.local:9100, the DNS address of the Traefik metrics service we exposed in the previous section.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 10s
    scrape_configs:
      - job_name: 'traefik'
        static_configs:
          - targets: ['traefik-metrics.traefik.svc.cluster.local:9100']
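Save and apply the ConfigMap, then verify it was created (the filename below is an assumption):

kubectl apply -f prometheus-config.yaml
kubectl get configmap prometheus-config -n prometheus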
In this section, we will ensure the persistence of the metrics collected by Prometheus from Traefik. Prometheus stores these metrics in the /prometheus folder.
To prevent the loss of data, we will create a PersistentVolumeClaim Kubernetes object that will be utilized in our deployment. You have the flexibility to adjust the storage capacity according to your needs, but for now, let's consider a capacity of 10Gi.
To create the PersistentVolumeClaim object, use the following manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-storage-persistence
  namespace: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
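Apply the claim and check its status; on minikube the default StorageClass usually provisions a volume automatically (the filename is an assumption):

kubectl apply -f prometheus-pvc.yaml

# The claim should eventually report STATUS "Bound"; with a
# WaitForFirstConsumer StorageClass it stays "Pending" until a pod mounts it
kubectl get pvc prometheus-storage-persistence -n prometheus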
In this section, we will deploy Prometheus using the latest stable version, v2.44.0. We will also configure the deployment to mount the Prometheus configuration to the /etc/prometheus folder, utilize persistent storage for the /prometheus directory, and expose the container port 9090 for future use.
To deploy Prometheus with these configurations, use the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:v2.44.0
          ports:
            - containerPort: 9090
              name: default
          volumeMounts:
            - name: prometheus-storage
              mountPath: /prometheus
            - name: config-volume
              mountPath: /etc/prometheus
      volumes:
        - name: prometheus-storage
          persistentVolumeClaim:
            claimName: prometheus-storage-persistence
        - name: config-volume
          configMap:
            name: prometheus-config
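As a quick sanity check, apply the deployment, wait for the rollout, and scan the logs for scrape errors (the filename is an assumption):

kubectl apply -f prometheus-deployment.yaml
kubectl rollout status deployment/prometheus -n prometheus

# Look for errors loading the config or reaching the Traefik target
kubectl logs -n prometheus deployment/prometheus | tail -n 20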
In order to utilize Prometheus in other applications, whether within or outside the cluster, we need to expose it using a Kubernetes service. Since we intend to use Prometheus only with our Grafana instance within the same cluster, we can create a service of type NodePort.
To expose Prometheus with a NodePort service, use the following manifest:
kind: Service
apiVersion: v1
metadata:
  name: prometheus
  namespace: prometheus
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9090
      nodePort: 30909
With the NodePort service configured for Prometheus, our Prometheus instance will be accessible via the Kubernetes DNS. To access Prometheus within the cluster, you can use the following URL: http://prometheus.prometheus.svc.cluster.local:9090.
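Before moving on, you can confirm from your own machine that Prometheus sees the traefik job as healthy. A minimal check via port-forward and the Prometheus HTTP API:

# Forward the Prometheus service locally
kubectl port-forward -n prometheus svc/prometheus 9090:9090 &
sleep 2

# The "traefik" target should report "health":"up"
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'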
If you wish to access the Prometheus instance outside of the cluster, you can expose it using a Traefik IngressRoute object. This step is optional and mainly useful if you want to leverage the Prometheus UI to query advanced metrics.
To expose Prometheus with a Traefik IngressRoute, use the following manifest as an example:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: prometheus
  namespace: prometheus
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`prometheus.test`)
      services:
        - kind: Service
          name: prometheus
          port: 9090
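After applying the IngressRoute, you can list it to confirm Traefik picked it up. A quick check, assuming the filename prometheus-ingressroute.yaml and that the Traefik CRDs are installed:

kubectl apply -f prometheus-ingressroute.yaml
kubectl get ingressroute -n prometheus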
Grafana is a powerful open-source analytics and interactive visualization web application. It allows you to create charts, graphs, and alerts for data analysis when connected to supported data sources like Prometheus.
To get started with Grafana, we first need to create a Kubernetes namespace called grafana where we will deploy all the necessary components related to Grafana.
To create the grafana namespace, use the following command:
kubectl create ns grafana
To configure Grafana and establish a connection with our Prometheus instance, we need to create a Kubernetes ConfigMap object. This configuration will include adding Prometheus as a datasource within Grafana.
Below is an example of a ConfigMap manifest for Grafana:
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: grafana
data:
  prometheus.yaml: |-
    {
      "apiVersion": 1,
      "datasources": [
        {
          "access": "proxy",
          "editable": true,
          "name": "prometheus",
          "orgId": 1,
          "type": "prometheus",
          "url": "http://prometheus.prometheus.svc.cluster.local:9090",
          "version": 1
        }
      ]
    }
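Apply it like the other manifests (the filename is an assumption); Grafana will read this provisioning file at startup once we mount it in the deployment below:

kubectl apply -f grafana-datasources.yaml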
To ensure that Grafana settings, users, dashboards, and other data are not lost in the event of a restart or cluster crash, we need to configure persistent storage for Grafana.
This can be achieved by creating a PersistentVolumeClaim object.
Below is an example of a PersistentVolumeClaim manifest for Grafana:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage-persistence
  namespace: grafana
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
To deploy Grafana with the latest stable version (10.0.0), we need to create a Kubernetes deployment. Additionally, we will configure the mounting of the datasources configuration and set up persistent storage for Grafana's database and plugins.
Below is an example of a Grafana deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      securityContext:
        runAsUser: 472
        runAsGroup: 472
        fsGroup: 472
      containers:
        - name: grafana
          image: grafana/grafana:10.0.0
          ports:
            - name: grafana
              containerPort: 3000
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-storage
            - mountPath: /etc/grafana/provisioning/datasources
              name: grafana-datasources
              readOnly: false
      volumes:
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: grafana-storage-persistence
        - name: grafana-datasources
          configMap:
            defaultMode: 420
            name: grafana-datasources
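Apply the deployment and wait for the rollout (the filename is an assumption); the provisioned Prometheus datasource should show up in Grafana's startup logs:

kubectl apply -f grafana-deployment.yaml
kubectl rollout status deployment/grafana -n grafana

# Grafana logs the datasources it provisions at startup
kubectl logs -n grafana deployment/grafana | grep -i datasource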
To make Grafana accessible outside of the cluster, we need to expose it using a Kubernetes Service and a Traefik IngressRoute. This will allow external access to the Grafana web interface.
First, let's create a Kubernetes Service to expose Grafana within the cluster. We can use the following manifest as an example:
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: grafana
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000
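Once the Service is applied (the filename is an assumption), minikube can hand you a direct URL for the NodePort, which is handy for a quick look before the IngressRoute exists:

kubectl apply -f grafana-service.yaml
minikube service grafana -n grafana --url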
To expose Grafana outside of the cluster using Traefik as an Ingress controller, we can create a Traefik IngressRoute.
Here's an example manifest:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: grafana
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`grafana.test`)
      services:
        - kind: Service
          name: grafana
          port: 3000
Now that we have deployed and configured all the necessary components, let's test our installations and access the applications.
To begin, run minikube tunnel in your terminal. This command sets up a tunnel that exposes services running in the Minikube cluster.
Next, update the /etc/hosts file on your machine to include the following routing:
127.0.0.1 grafana.test
127.0.0.1 prometheus.test
To update the file, open a terminal and execute the command sudo nano /etc/hosts. You will be prompted to enter your password, and then you can add the entries above.
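If you prefer a non-interactive alternative to nano, you can append both entries in one step:

echo "127.0.0.1 grafana.test" | sudo tee -a /etc/hosts
echo "127.0.0.1 prometheus.test" | sudo tee -a /etc/hosts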
Now, with everything set up, you can access the applications at https://prometheus.test and https://grafana.test in your web browser.
The default account for Grafana is admin/admin. After logging in, navigate to Dashboards > New > Import. You can use the ID 4475 to import the Traefik dashboard from the official Grafana.com dashboards database.
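To sanity-check the data behind the dashboard, you can also query one of Traefik's standard metrics directly through the Prometheus API (the -k flag skips TLS verification, since the local certificate is most likely self-signed):

# Per-second request rate on Traefik's entrypoints over the last 5 minutes
curl -k -G https://prometheus.test/api/v1/query \
  --data-urlencode 'query=rate(traefik_entrypoint_requests_total[5m])'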
With the dashboard imported, you can now explore Grafana and create custom panels and charts to visualize your metrics.
In this blog post, we set up a monitoring system with Prometheus and Grafana to visualize network requests to services running behind the Traefik ingress controller. This configuration allows us to monitor and analyze metrics effectively, enabling better insight into our system's performance.
Happy monitoring!