How to deploy Prometheus on OpenShift 4 and configure it to scrape Pod metrics

Step 1: Setting up OpenShift

The simplest way to set up OpenShift locally is using CodeReady Containers (crc). It provides a minimal OpenShift 4 installation on a VM running locally. Follow the steps in the crc getting-started guide to set up your cluster.

Step 2: Setting up Prometheus

To set up Prometheus, let's first create a namespace for it:

oc new-project monitoring && oc project monitoring

Next, we need to deploy Prometheus:

oc new-app prom/prometheus

The above command will pull the latest Prometheus image into our cluster's registry and create a DeploymentConfig running a single Prometheus pod.

Step 3: Expose Prometheus

There are several ways to expose an application running on OpenShift. For my own local use cases, I find the simplest is a NodePort type service.
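For reference, the service created in the next step looks roughly like this (a sketch: the selector assumes `oc new-app` labeled the pods with app=prometheus, its default behavior, and cluster-assigned fields such as the node port are filled in by OpenShift):

```yaml
# Sketch of a NodePort Service for Prometheus (assumed labels/ports)
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: prometheus       # label oc new-app applies by default
  ports:
  - port: 9090            # Prometheus' default listen port
    targetPort: 9090
    # nodePort is assigned by the cluster, e.g. 31138
```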

To create a service for our Prometheus application, run the following command:

oc expose dc prometheus --type=NodePort --generator=service/v1

This will create a service for Prometheus:

$ oc get service
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
prometheus   NodePort   172.30.84.198   <none>        9090:31138/TCP   9h

Notice the port assigned to the service; this port is used to access the service behind the cluster's IP address. In this example the port assigned to my service is 31138.
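The 9090:31138/TCP notation means service port 9090 is mapped to node port 31138. If you want to grab the node port in a script, one way (shown here against the sample output above) is:

```shell
# Parse the node port out of an `oc get service` output line.
# The PORT(S) column is the 5th field, formatted <port>:<nodePort>/<proto>.
line='prometheus   NodePort   172.30.84.198   <none>   9090:31138/TCP   9h'
node_port=$(printf '%s\n' "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$node_port"   # prints 31138
```

On a live cluster you could feed this the output of `oc get service prometheus` instead, or skip the text parsing entirely with a jsonpath query such as `oc get service prometheus -o jsonpath='{.spec.ports[0].nodePort}'`.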

To retrieve the crc cluster's IP address, run crc ip . Finally, to access Prometheus, open <cluster-ip>:31138 .

Step 4: Configure Prometheus RBAC

If you look at the logs of the Prometheus pod you have just deployed (by running oc logs dc/prometheus ), you will find that it is showing a lot of warnings and errors because it isn't authorized to list or watch Pods . In order to allow Prometheus to list and watch pods, we need to create a ClusterRole and a ClusterRoleBinding for it.

For example, here is a ClusterRole I created for Prometheus:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch

And a corresponding ClusterRoleBinding that attaches it to the default service account in the monitoring project, which Prometheus runs under.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prom-role-binding  # underscores are not valid in Kubernetes names
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: prometheus
  apiGroup: rbac.authorization.k8s.io

Create both objects (for example by saving them to a file and running oc apply -f on it), and Prometheus is ready to scrape!

Step 5: Configure Prometheus to scrape pod metrics

In order to configure Prometheus to scrape our pods' metrics, we need to supply it with a configuration file. To do that, we will create the configuration in a ConfigMap and mount it at a predefined path in the Prometheus container (namely /etc/prometheus/ ).

Here is an example config file I created to scrape all pods matching the label app=ocm-api-tests . The kubernetes_sd_configs section allows Prometheus to retrieve scrape targets from the Kubernetes REST API and stay synchronized with the cluster state. In our example Prometheus is configured to discover pods.

These are concurrent pods which run load testing against one of our backend services at cloud.redhat.com. Once the pods finish running their requests against the backend service, they expose their results (latency, throughput and errors) for Prometheus to scrape.
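The scraped endpoint serves its results in the Prometheus text exposition format. The metric names below are hypothetical stand-ins for the latency and error results mentioned above, just to illustrate the format:

```
# HELP request_latency_seconds Latency of requests against the backend (hypothetical metric)
# TYPE request_latency_seconds histogram
request_latency_seconds_sum 42.7
request_latency_seconds_count 1200
# HELP request_errors_total Failed requests (hypothetical metric)
# TYPE request_errors_total counter
request_errors_total 3
```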

To find out more details about the scraping rules, see the Prometheus configuration documentation.

$ cat prometheus.yml
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
- job_name: ocm-api-tests
  scrape_interval: 10s
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: k8s_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: k8s_pod_name
  - source_labels: [__address__]
    action: replace
    regex: ([^:]+)(?::\d+)?
    replacement: ${1}:8080
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_label_app]
    action: keep
    regex: ocm-api-tests

It's important to name this file prometheus.yml since Prometheus looks for a config file with this specific name in the mounted path.
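To make the relabeling concrete, here is a sketch of a pod this config would keep and scrape. The pod and image names are hypothetical; what matters is the app=ocm-api-tests label, matched by the keep rule, and port 8080, which the __address__ rewrite targets:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ocm-api-tests-1        # hypothetical pod name
  labels:
    app: ocm-api-tests         # matched by the "keep" relabel rule
spec:
  containers:
  - name: load-test
    image: quay.io/example/ocm-api-tests:latest  # hypothetical image
    ports:
    - containerPort: 8080      # the port the __address__ rewrite points at
```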

To create the ConfigMap, run the following:

oc create configmap prom-config --from-file=prometheus.yml

This will create a ConfigMap named prom-config containing our config file.

Next, we need to edit the Prometheus deployment to mount this ConfigMap under /etc/prometheus . To do this, run the following command:

oc edit dc prometheus

This command should open your default editor with the contents of the deployment definition. Under volumes we will add a new volume pointing to our ConfigMap :

      volumes:
      - emptyDir: {}
        name: prometheus-volume-1
      - configMap:
          defaultMode: 420
          name: prom-config
        name: prom-config-volume

Next, under the Prometheus container we will create the volume mount:

        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-volume-1
        - mountPath: /etc/prometheus/
          name: prom-config-volume

Once saved, the edit will trigger a redeployment of Prometheus with our configuration in place.

That's it, we're all done! Hope you enjoyed this walkthrough.
