1.8 TB
Yes, we are using tolerations and pod affinity to ensure no other process is running on the nodes besides Dgraph (plus a couple of small k8s sidecars like the kubelet or some monitoring agents).
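Roughly, the relevant part of our alpha StatefulSet looks like the sketch below (the taint key, labels and names here are placeholders, not our exact values):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph-alpha
spec:
  serviceName: dgraph-alpha
  replicas: 3
  selector:
    matchLabels:
      app: dgraph-alpha
  template:
    metadata:
      labels:
        app: dgraph-alpha
    spec:
      # tolerate the taint we put on the dedicated Dgraph nodes
      tolerations:
        - key: dedicated
          operator: Equal
          value: dgraph
          effect: NoSchedule
      affinity:
        # only schedule on nodes labelled for Dgraph
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: dedicated
                    operator: In
                    values: ["dgraph"]
        # keep the alphas on separate nodes
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: dgraph-alpha
              topologyKey: kubernetes.io/hostname
      containers:
        - name: alpha
          image: dgraph/dgraph:latest  # pinned to our actual version in practice
          # command/args, ports, volumes and resources omitted for brevity
```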
Likely OOMKilled.
How can I monitor that process besides looking at memory usage metrics? Can we trigger that cleanup manually?
What do you mean by balancing exactly? Spinning up more replicas (we have 3 right now), or using sharding maybe? Right now we connect to Dgraph over the GraphQL interface and have a k8s Service in front of the HTTP endpoint, so the load should already be split across the 3 alphas.
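For reference, that Service is just a plain ClusterIP service over the alpha HTTP port, something like this (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dgraph-alpha-http
spec:
  type: ClusterIP
  selector:
    app: dgraph-alpha
  ports:
    - name: http
      port: 8080        # alpha HTTP port, /graphql lives here
      targetPort: 8080
```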
Here is one
