OpenShift Tips


Move from a three-node cluster to a regular 3 control-plane + workers deployment

OCP4 can be deployed either as a compact three-node cluster or as a regular cluster with 3 control-plane nodes plus compute nodes. In a three-node cluster, the masters are labeled as workers as well; otherwise, they are labeled only as masters.
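
The role labels in question live on the node objects (on a live cluster, `oc get nodes --show-labels` shows them). As a quick illustration, here is the check run against canned node metadata (a sample fragment, not real cluster output):

```shell
# Sample node metadata only: on a three-node cluster the control-plane
# nodes carry BOTH role labels, which is what makes them schedulable
# for regular workloads.
cat <<'EOF' | jq -r '.metadata.labels | keys[] | select(startswith("node-role"))'
{"metadata":{"labels":{"kubernetes.io/os":"linux","node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""}}}
EOF
```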

If you want to add workers to a three-node cluster, do the following:

Create compute nodes

The first step is to add the desired compute nodes by scaling up the required MachineSet replicas, as explained in the official documentation:
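
The scale-up itself is a single `oc scale` command. A sketch, assuming a hypothetical MachineSet named `kni1-worker` (list the real names with `oc get machinesets -n openshift-machine-api`); the block prints the command instead of executing it so it can be reviewed before touching a real cluster:

```shell
# "kni1-worker" is a hypothetical MachineSet name; substitute your own.
machineset=kni1-worker
replicas=3
# Printed rather than executed, so the exact command can be checked first:
echo "oc scale machineset ${machineset} -n openshift-machine-api --replicas=${replicas}"
```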

oc get nodes

NAME             STATUS   ROLES           AGE     VERSION
<worker-0>       Ready    worker          3m31s   v1.20.0+bafe72f
<worker-1>       Ready    worker          2m31s   v1.20.0+bafe72f
<worker-2>       Ready    worker          3m24s   v1.20.0+bafe72f
kni1-vmaster-0   Ready    master,worker   20h     v1.20.0+bafe72f
kni1-vmaster-1   Ready    master,worker   20h     v1.20.0+bafe72f
kni1-vmaster-2   Ready    master,worker   20h     v1.20.0+bafe72f

Set control-plane nodes as unschedulable

oc patch schedulers.config.openshift.io cluster --type merge --patch '{"spec":{"mastersSchedulable": false}}'

This removes the worker label from the masters. The OCP components will eventually move to the workers, as instructed by their node selectors, but only when their pods are rescheduled. You can trigger this by deleting the pods and letting the OpenShift reconciliation reschedule them.
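
The node selector doing the steering looks like the following. It is shown here against a canned pod-spec fragment (a representative shape, not live cluster output) so it can be inspected without a cluster:

```shell
# Representative pod spec fragment: the worker role label in nodeSelector
# is what keeps these pods off the (now unschedulable) masters.
cat <<'EOF' | jq -r '.spec.nodeSelector | keys[]'
{"spec":{"nodeSelector":{"kubernetes.io/os":"linux","node-role.kubernetes.io/worker":""}}}
EOF
```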


Router

Roll out the latest deployment to force rescheduling of the router pods without losing availability:

oc rollout -n openshift-ingress restart deployment/router-default

Or delete the router pods to force the reconciliation:

oc delete pod -n openshift-ingress -l ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default


Image registry

Roll out the latest deployment to force rescheduling of the image-registry pod:

oc rollout -n openshift-image-registry restart deploy/image-registry

Or delete the image-registry pod to force the reconciliation:

oc delete pod -n openshift-image-registry -l docker-registry=default

Monitoring stack

Roll out the latest deployments and statefulsets to force rescheduling of the monitoring stack pods:

oc rollout -n openshift-monitoring restart statefulset/alertmanager-main
oc rollout -n openshift-monitoring restart statefulset/prometheus-k8s
oc rollout -n openshift-monitoring restart deployment/grafana
oc rollout -n openshift-monitoring restart deployment/kube-state-metrics
oc rollout -n openshift-monitoring restart deployment/openshift-state-metrics
oc rollout -n openshift-monitoring restart deployment/prometheus-adapter
oc rollout -n openshift-monitoring restart deployment/telemeter-client
oc rollout -n openshift-monitoring restart deployment/thanos-querier

Or delete the pods to force the reconciliation:

oc delete pod -n openshift-monitoring -l app=alertmanager
oc delete pod -n openshift-monitoring -l app=prometheus
oc delete pod -n openshift-monitoring -l app=grafana
oc delete pod -n openshift-monitoring -l k8s-app=kube-state-metrics
oc delete pod -n openshift-monitoring -l k8s-app=openshift-state-metrics
oc delete pod -n openshift-monitoring -l name=prometheus-adapter
oc delete pod -n openshift-monitoring -l k8s-app=telemeter-client
oc delete pod -n openshift-monitoring -l app.kubernetes.io/name=thanos-query

List all container images running in a cluster

oc get pods -A -o go-template --template='{{range .items}}{{range .spec.containers}}{{printf "%s\n" .image}}{{end}}{{end}}' | sort -u

List all container images stored in a cluster

for node in $(oc get nodes -o name); do oc debug ${node} -- chroot /host sh -c 'crictl images -o json' 2>/dev/null | jq -r '.images[].repoTags[]'; done | sort -u
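
The jq step can likewise be tried on a canned crictl-style payload (an assumed shape with hypothetical image names):

```shell
# Each image may carry several repo tags; all of them are listed, and
# duplicates across nodes collapse under sort -u.
cat <<'EOF' | jq -r '.images[].repoTags[]' | sort -u
{"images":[
  {"repoTags":["quay.io/app/a:1","quay.io/app/a:latest"]},
  {"repoTags":["quay.io/app/b:1"]}
]}
EOF
```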
Last updated on 21 Aug 2023
Published on 21 Apr 2020