When running builds in a large OpenShift cluster with many nodes, it becomes hard to control how pods are assigned to nodes. If the cluster has nodes with different hardware capacities, the kube-scheduler assigns more pods to the large nodes, while the small nodes get few or no pods.

Goal:
We want to run Jenkins pipeline builds in an OpenShift/Kubernetes cluster. We would like to run parallel builds and make sure no more than one build runs per node at any given point in time.

We use the OpenShift Sync and OpenShift Jenkins Pipeline plugins to trigger and run the builds in the cluster. These plugins create dynamic Jenkins slaves to run the builds in, so for each build we get a pod deployed in the cluster within our namespace. The pod runs with the jenkins service account's privileges so it has enough permissions to do the builds. In our case we run various checks along with building the source content, and we also do dependency builds based on base image updates.

Problem:
While running a huge number of on-demand builds, the pods get assigned to the high-capacity nodes and the small nodes stay empty. This causes long delays in the builds, as the context switching between builds packed onto the same nodes eats up resources.

To solve this we tried setting the nodeSelector field in the Kubernetes pod spec. But it turned out that the nodes we wanted to select for a specific pod got excluded from the selection: in the Jenkins pipeline, nodeSelector behaved like node anti-affinity instead of affinity.
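For reference, a minimal sketch of what that first attempt looked like using the Kubernetes plugin's podTemplate step. The 'builder' label, the cloud name, the 'build-node=true' node label and the agent image are placeholders for our setup, not exact values from our configs:

// Sketch: podTemplate with a plain nodeSelector (our first attempt)
podTemplate(
    cloud: 'openshift',                 // name of the Kubernetes cloud configured in Jenkins (placeholder)
    label: 'builder',
    nodeSelector: 'build-node=true',    // expected: schedule only on nodes carrying this label
    containers: [
        containerTemplate(name: 'jnlp', image: 'openshift/jenkins-agent-base:latest')  // placeholder image
    ]
) {
    node('builder') {
        stage('Build') {
            sh 'make build'
        }
    }
}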

More than one pod still gets assigned to the same node, because even with a node selector in place we have no pod affinity rule to keep the build pods apart.

To solve this we need podAntiAffinity, which stops the kube-scheduler from assigning more than one build pod to the same node.
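A podAntiAffinity rule of roughly this shape in the pod spec does that, assuming every build pod carries a common label (the role: jenkins-build label here is a hypothetical example):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          role: jenkins-build              # label carried by every build pod (hypothetical)
      topologyKey: kubernetes.io/hostname  # "one per node": spread matching pods across hostnames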

Now, in order to add the podAntiAffinity and nodeAffinity rules, we need to update the pod template in the Jenkins pipeline build configuration.

The same goes for nodeAntiAffinity. The main problem is that we have to know, or research in depth, the Jenkins pipeline DSL to understand what syntax needs to be added here.

Solution:
To avoid all the confusion around the DSL and its allowed syntax, we should use the Kubernetes pod config directly in the podTemplate in the Jenkins pipeline.
To get this working, the Jenkins pipeline podTemplate supports a yaml parameter with which we can put the whole pod config into the pipeline's build template. This allows every specification that Kubernetes allows, so we are no longer restricted by the Jenkins pipeline DSL.
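A sketch of how the pieces fit together, with the pod spec embedded as YAML inside the podTemplate step. The pod label, the build-node label key, the cloud name and the agent image are assumptions for illustration and would need to match the actual cluster setup:

// Sketch: podTemplate carrying the full Kubernetes pod spec as YAML
podTemplate(
    cloud: 'openshift',        // Kubernetes cloud name configured in Jenkins (placeholder)
    label: 'builder',
    yaml: '''
apiVersion: v1
kind: Pod
metadata:
  labels:
    role: jenkins-build        # common label so podAntiAffinity can match other build pods
spec:
  affinity:
    nodeAffinity:              # only run on nodes labelled as build nodes (hypothetical label)
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: build-node
            operator: In
            values:
            - "true"
    podAntiAffinity:           # never co-locate two build pods on the same node
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            role: jenkins-build
        topologyKey: kubernetes.io/hostname
  containers:
  - name: jnlp
    image: openshift/jenkins-agent-base:latest   # placeholder agent image
'''
) {
    node('builder') {
        stage('Build') {
            sh 'make build'
        }
    }
}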

Now the kube-scheduler cannot assign more than one build pod to a single node, and we also have nodeAffinity to steer the pods to specific nodes.

The source for how this looks in our configs is here: https://github.com/CentOS/container-pipeline-service/blob/master/seed-job/template.yaml