When Kubernetes pods won’t leave the Terminating state, the underlying node is often unhealthy or unreachable. Stuck pods can block replacements from scheduling, causing unavailability, and can become a financial drain on your organization by triggering unnecessary scaling.
This is a difficult issue for many teams to diagnose because pods routinely pass through the Terminating state during normal rollouts, so it’s tricky to know which ones have been stuck for too long. Fixing it is also complex, since node draining in Kubernetes must be configured to suit your environment, taking into account termination grace periods, PodDisruptionBudgets, and other cluster-wide settings.
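One way to separate pods that are terminating normally from ones that are stuck is to look at `metadata.deletionTimestamp`: Kubernetes shows a pod as Terminating once that field is set, and on a broken node it never clears. A minimal sketch of that check, assuming `kubectl get pods -A -o json` output and an illustrative five-minute threshold:

```python
import json
from datetime import datetime, timedelta, timezone

# Example threshold; tune this to your environment's grace periods.
STUCK_THRESHOLD = timedelta(minutes=5)

def find_stuck_pods(pod_list_json: str, now: datetime) -> list[str]:
    """Return "namespace/name" for pods whose deletionTimestamp is older
    than STUCK_THRESHOLD. A pod shows as Terminating once its
    deletionTimestamp is set; on a broken node it is never removed."""
    stuck = []
    for pod in json.loads(pod_list_json)["items"]:
        meta = pod["metadata"]
        ts = meta.get("deletionTimestamp")
        if ts is None:
            continue  # pod is not being deleted
        deleted_at = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        if now - deleted_at > STUCK_THRESHOLD:
            stuck.append(f'{meta["namespace"]}/{meta["name"]}')
    return stuck

# Sample `kubectl get pods -A -o json` output, trimmed to relevant fields.
sample = json.dumps({"items": [
    {"metadata": {"namespace": "web", "name": "api-1",
                  "deletionTimestamp": "2024-01-01T10:00:00Z"}},
    {"metadata": {"namespace": "web", "name": "api-2"}},  # not deleting
]})

now = datetime(2024, 1, 1, 10, 30, tzinfo=timezone.utc)
print(find_stuck_pods(sample, now))  # → ['web/api-1']
```

In a live cluster you would feed this function the real `kubectl` output rather than the sample above; the same filter can also be expressed directly with `jq`.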
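As an illustration of the configuration choices involved, a drain command typically needs explicit timeout and grace-period flags; the values below are placeholders to adapt, not recommendations:

```shell
# Illustrative drain of a suspect node; <node-name> and the flag values
# must be tuned to your cluster's PodDisruptionBudgets and workloads.
# --ignore-daemonsets:      DaemonSet pods cannot be evicted, so skip them
# --delete-emptydir-data:   allow eviction of pods using emptyDir volumes
# --grace-period=60:        seconds each pod gets to shut down cleanly
# --timeout=5m:             give up if the drain takes longer than this
kubectl drain <node-name> \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --grace-period=60 \
  --timeout=5m
```

If the drain stalls, it is usually because a PodDisruptionBudget is blocking eviction or a pod is already stuck Terminating, which points back to the diagnosis problem above.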