1.1 Tasks: Setup
Executing oc commands
Note
As you will be executing some oc commands in the following labs, make sure you have the tool installed and are logged in to your OpenShift Cluster.
You can copy the login command from the OpenShift UI:
- Browse to http://LOCALHOST_OPENSHIFT
- Click on your name in the top right
- Click Copy login command
- Replace port 6443 with 443
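The copied command typically has the following shape (a sketch only; the token and API server URL are placeholders for the values copied from your own cluster):

```
oc login --token=sha256~<your-token> --server=https://api.<cluster-domain>:443
```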
Task 1.1.1: Identify your monitoring repository
Before we get started, take the time to familiarize yourself with your config repository. In this training you will not be working with your team's config repository, to prevent the training resources from getting in the way of day-to-day business. You can find more information on how to deploy the Baloise monitoring stack for your team at Deploying the Baloise Monitoring Stack.
The working directory for this training is the folder in your config repository
with the -monitoring suffix. If necessary, create the directory <team>-monitoring.
Warning
Please name all files created in this training with the filename prefix training_. This naming pattern will help in cleaning up all related files after training completion.

Task 1.1.2: Deploy example application
Note
We will deploy an application for demonstration purposes in our monitoring namespace. This should never be done for production use cases. If you are familiar with deploying on OpenShift, you can complete this lab by deploying the application on our test cluster.

Create the following file training_python-deployment.yaml in your monitoring directory:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example-web-python
  name: example-web-python
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-web-python
  template:
    metadata:
      labels:
        app: example-web-python
    spec:
      containers:
      - image: quay.io/acend/example-web-python
        imagePullPolicy: Always
        name: example-web-python
      restartPolicy: Always
```

Use the following command to verify that the pod of the deployment example-web-python is ready and running (use CTRL+C to exit the command):
```shell
team=<team>
oc -n $team-monitoring get pod -w -l app=example-web-python
```
We also need to create a Service for the new application. Create a file with the name training_python-service.yaml with the following content:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-web-python
  labels:
    app: example-web-python
spec:
  ports:
  - name: http
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: example-web-python
  type: ClusterIP
```

This creates a so-called Kubernetes Service. Verify it with the following command:
```shell
team=<team>
oc -n $team-monitoring get svc -l app=example-web-python
```
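To double-check that the Service actually selects the pod, you can also list its endpoints (same pattern as above; this requires the deployment to be running on the cluster):

```
team=<team>
oc -n $team-monitoring get endpoints example-web-python
```

The ENDPOINTS column should show the pod IP with port 5000; an empty column usually means the Service selector does not match the pod labels.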
Which gives you an output similar to this:
```
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
example-web-python   ClusterIP   172.24.195.25   <none>        5000/TCP   24s
```
Our example application can now be reached on port 5000.
We can now make the application directly available on our machine using port-forward:
```shell
team=<team>
oc -n $team-monitoring port-forward svc/example-web-python 5000
```
In a separate terminal, use curl to verify the successful deployment of our example application:
```shell
curl localhost:5000/metrics
```
This should result in something like:

```
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 541.0
python_gc_objects_collected_total{generation="1"} 344.0
python_gc_objects_collected_total{generation="2"} 15.0
...
```
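The output uses the Prometheus text exposition format: one sample per line, a metric name with optional labels in braces, followed by the value. A minimal sketch of pulling out a single value with standard shell tools (using an inline sample line here instead of live curl output):

```shell
# Sample line in Prometheus text format (would normally come from
# `curl -s localhost:5000/metrics`).
sample='python_gc_objects_collected_total{generation="0"} 541.0'

# The value is the last whitespace-separated field on the line.
echo "$sample" | awk '{print $NF}'
# prints 541.0
```

The same filter works on the full metrics output, e.g. combined with grep to select one metric first.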
Since our newly deployed application now exposes metrics, the next thing we need to do is tell our Prometheus server to scrape them. In a highly dynamic environment like Kubernetes, this is done with so-called service discovery.
Task 1.1.3: Create a ServiceMonitor
Task description:
Create a ServiceMonitor for the example application.
- Create a ServiceMonitor, which will configure Prometheus to scrape metrics from the example-web-python application every 30 seconds.

For this to work, you need to ensure:

- The example-web-python Service is labeled correctly and matches the labels you’ve defined in your ServiceMonitor.
- The port name in your ServiceMonitor configuration matches the port name in the Service definition. Hint: check with oc -n <team>-monitoring get service example-web-python -o yaml
- Verify the target in the Prometheus user interface.
Hints
Create the following ServiceMonitor (training_python-servicemonitor.yaml):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: example-web-python
  name: example-web-python-monitor
spec:
  endpoints:
  - interval: 30s
    port: http
    scheme: http
    path: /metrics
  selector:
    matchLabels:
      app: example-web-python
```
Verify that the target gets scraped in the Prometheus user interface (either on CAASI or CAAST, depending where you deployed the application).
Navigate to the list of targets by clicking Status and then Targets in the menu. Target name: serviceMonitor/<team>-monitoring/example-web-python-monitor/0 (it may take up to a minute for Prometheus to load the new configuration and scrape the metrics).
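Once the target shows up, you can also confirm scraping with a PromQL query in the Prometheus UI. A sketch, assuming Prometheus attaches the usual service label (the exact label names and values depend on your setup):

```
up{service="example-web-python"}
```

A value of 1 means the last scrape of that target succeeded; 0 means the scrape failed.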