
Increase “/dev/shm” size on a Kubernetes deployment

Recently we came across an issue where our application, running inside a Kubernetes pod, was running out of “shm” (shared) memory. That is how we learned that the Kubernetes default “shm” limit is 64 MB, irrespective of the size of the nodes.
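You can confirm the limit by inspecting the tmpfs mounted at `/dev/shm` from inside the pod (running `df -h /dev/shm` via `kubectl exec` works just as well). A minimal Python check, assuming it is executed inside the container:

```python
import os

# Query the filesystem backing /dev/shm; in a default Kubernetes pod
# this reports the 64 MiB tmpfs ceiling described above.
st = os.statvfs("/dev/shm")
total_mib = st.f_blocks * st.f_frsize / (1024 * 1024)
print(f"/dev/shm size: {total_mib:.0f} MiB")
```

On a stock pod this prints roughly `/dev/shm size: 64 MiB`; after the fix below it reflects the larger mount.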

Fix

As of now, there is no Kubernetes-native configuration option to increase it, but you can use the workaround described here: https://docs.okd.io/latest/dev_guide/shared_memory.html

Mounting an emptyDir volume at “/dev/shm” and setting its medium to Memory did the trick!

spec:
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
  containers:
  - name: app                  # container name added; required by the pod spec
    image: imagename:latest
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
Adding the above mount works, but the resulting tmpfs is unbounded: it can grow to consume all of the node's memory. There is an open GitHub pull request that may give us an option to limit the tmpfs size: https://github.com/kubernetes/kubernetes/pull/63641
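If that change lands, the existing `emptyDir.sizeLimit` field could cap the tmpfs. A hedged sketch of what the spec might then look like (the `256Mi` value is an arbitrary example, and whether `sizeLimit` is actually enforced for `medium: Memory` depends on the linked PR):

```yaml
spec:
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: 256Mi   # example cap; enforcement for medium: Memory depends on the linked PR
  containers:
  - name: app
    image: imagename:latest
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
```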