IBM Spectrum Conductor for Containers has been rebranded as IBM Cloud private with version 1.2.0 (https://www.ibm.com/developerworks/community/blogs/fe25b4ef-ea6a-4d86-a629-6f87ccf4649e/entry/IBM_Cloud_private_formerly_IBM_Spectrum_Conductor_for_Containers_version_1_2_0_is_now_available?lang=en)
IBM released version 6.0.0.1 of Orient Me and with it added new applications, increasing the total number of pods in play. Each pod requires resources to run, and recently there has been some frustration among those who work with Connections when trying to get Orient Me up and running on smaller servers, whether for testing or for deployment to SMB customers.
I spent some time looking at how to limit the resources consumed by decreasing the number of pods.
Kubernetes allows you to scale your pods up or down. This can be done on the command line or via the UI.
Since I prefer the command line, here is how you scale an application and its effect on the number of pods. There are two ways in which this is done: via Replica Sets and Stateful Sets. I won’t go into the difference between the two because I’m not wholly sure myself, but suffice to say that most of the OM applications use Replica Sets.
Replica Sets
I’m using analysisservice as an example because it is at the top when commands are run.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
analysisservice-1093785398-31ks2 1/1 Running 0 8m
analysisservice-1093785398-hf90j 1/1 Running 0 8m
# kubectl get rs
NAME DESIRED CURRENT READY AGE
analysisservice-1093785398 2 2 2 9m
The following command tells K8s to change the number of pods that will accept load to one.
# kubectl scale --replicas=1 rs/analysisservice-1093785398
replicaset "analysisservice-1093785398" scaled
Below you can see that just the one pod is ready to accept load. Note that the desired number is still two, which means two is what the ReplicaSet will return to if all the pods are deleted or the OS is restarted.
# kubectl get rs
NAME DESIRED CURRENT READY AGE
analysisservice-1093785398 2 2 1 9m
The pod that will no longer accept load is destroyed and a new one replaces it.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
analysisservice-1093785398-31ks2 1/1 Running 0 18m
analysisservice-1093785398-4njpn 1/1 Terminating 0 5m
analysisservice-1093785398-fmnrd 0/1 ContainerCreating 0 3s
You can see that the new pod is not “ready” and thus not accepting any load.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
analysisservice-1093785398-31ks2 1/1 Running 0 19m
analysisservice-1093785398-fmnrd 0/1 Running 0 43s
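If you want to see why a pod is reporting 0/1, describing it shows the readiness probe status under the Conditions and Events sections (the pod name below is simply the one from my output above; yours will differ).
# kubectl describe pod analysisservice-1093785398-fmnrd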
The reverse is also true: you can scale the number of pods upwards. ICp can do this automatically with policies based on CPU usage, creating more pods under load and then decreasing them when the load drops.
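As an aside, outside of the ICp UI these policies map onto the standard Kubernetes Horizontal Pod Autoscaler. As a rough sketch (I haven’t tried this against OM), you could let K8s scale a deployment between one and three replicas based on CPU usage like so:
# kubectl autoscale deployment analysisservice --min=1 --max=3 --cpu-percent=80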
Scaling the ReplicaSet in this way does not persist across OS restarts or deletion of all the pods. To make the change permanent, the following steps need to be followed.
# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
analysisservice 2 2 2 2 34m
This command amends the deployment configuration which was set in complete.6_0.yaml in the OM binaries.
# kubectl edit deployment analysisservice
apiVersion: extensions/v1beta1
kind: Deployment
This will open in vi, though you can change the editor if you prefer. Under the spec section you want to amend the number of replicas.
spec:
  replicas: 1
  selector:
    matchLabels:
      mService: analysisservice
      name: analysisservice
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
Ignore the status section. Save and close (:wq).
# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
analysisservice 1 1 1 1 44m
This time there is no second pod listed with a 0/1 ready value; the second pod has been deleted.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
analysisservice-1093785398-kz76m 1/1 Running 0 17m
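If you would rather not open an editor at all, the same change can be made non-interactively by scaling the deployment rather than the ReplicaSet; because this updates the deployment's desired count, it persists in the same way as the edit above.
# kubectl scale --replicas=1 deployment/analysisservice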
You can use the following command to open all the application deployments in vi and update them all at one time.
# kubectl edit deployment
When you save and close, the applications will be updated in line with the values you set for the replicas.
# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
analysisservice 1 1 1 1 55m
haproxy 1 1 1 1 57m
indexingservice 1 1 1 1 55m
itm-services 1 1 1 1 55m
mail-service 1 1 1 1 55m
orient-webclient 1 1 1 1 55m
people-migrate 1 1 1 1 55m
people-relation 1 1 1 1 55m
people-scoring 1 1 1 1 55m
redis-sentinel 1 1 1 1 57m
retrievalservice 1 1 1 1 55m
solr1 1 1 1 1 57m
solr2 1 1 1 1 57m
solr3 1 1 1 1 57m
zookeeper-controller-1 1 1 1 1 57m
zookeeper-controller-2 1 1 1 1 57m
zookeeper-controller-3 1 1 1 1 57m
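As an alternative to updating every deployment interactively in vi, a scripted route (treat this as a sketch, as I haven’t run it against a full OM install) is to loop over the deployments and scale each one down to a single replica:
# for d in $(kubectl get deployments -o name); do kubectl scale --replicas=1 "$d"; done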
Running the following shows that the number of pods has decreased by quite a lot.
# kubectl get pods
Checking the ReplicaSets again shows the values have decreased.
# kubectl get rs
Mongo and redis-server do not use Replica Sets; they use StatefulSets.
StatefulSets
The following command shows that there are 3 pods for each application.
# kubectl get statefulsets
NAME DESIRED CURRENT AGE
mongo 3 3 1h
redis-server 3 3 1h
In the same vein as before, you edit the replicas, decreasing or increasing them as you see fit.
# kubectl edit statefulsets
statefulset "mongo" edited
statefulset "redis-server" edited
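As with the deployments, the scale command should also work here non-interactively (again, a sketch; on my system I used the edit above):
# kubectl scale --replicas=1 statefulset/mongo
# kubectl scale --replicas=1 statefulset/redis-server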
The end result is that each StatefulSet is configured with only the one replica.
# kubectl get statefulsets
NAME DESIRED CURRENT AGE
mongo 1 1 1h
redis-server 1 1 1h
The effect is seen when you list the pods.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-0 2/2 Running 0 1h
redis-server-0 1/1 Running 0 1h
At install time
These changes can be made at install time by updating the various .yml files in /microservices/hybridcloud/templates/* and /microservices/hybridcloud/templates/complete.6_0.yaml and then running install.sh.
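A quick way to find which of those templates carry a replicas value to change (assuming the paths above) is a simple grep before running install.sh:
# grep -rn "replicas:" /microservices/hybridcloud/templates/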
Finally
I have only experimented with the default applications and have not touched those in the kube-system namespace, which are ICp applications rather than OM specific.
I haven’t tried this on a working system yet, only on a detached single node running all roles with a hostPath configuration.
Since there is no load on the server, my measurements of the resources consumed before and after the changes are far from scientific, but looking at the UI the amount of CPU and memory used is certainly less than before.
I have no idea as yet whether this will break OM, but I will persist and see whether it does or whether it works swimmingly. If anyone tries this out, please feed back to me.
BTW, I restarted the OS and had a couple of problems with the analysisservice and indexingservice pods not being ready and showing as unhealthy, but after deleting the haproxy, redis-server-0 and redis-sentinel pods, all my pods are now showing as healthy.
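For reference, deleting a pod simply makes its controller recreate it, so this is a reasonably safe way to bounce something unhealthy; for example (the haproxy and redis-sentinel pod names carry a hash suffix on your system):
# kubectl delete pod redis-server-0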
IBM, please, please provide a relatively simple way (ideally at install time) for us to cut the deployment down to the bare bones, perhaps with a small, medium or large deployment option as you do with traditional Connections.