Hi, we’ve encountered an issue in our production environment: while a Dgraph Alpha node was executing a drop-predicate operation, client queries to Dgraph timed out and returned no results.
Looking at the metrics around the time of the incident, we noticed that a predicate move was in progress and that Badger triggered a compaction at the same time. This left almost no I/O capacity available for queries, and the number of queries handled dropped to zero during that period.
Given this, we are considering setting Zero’s `rebalance_interval` to a very large value in order to effectively disable predicate moves (see the sketch below). We’d like to ask whether this approach could have any negative side effects.
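Concretely, this is roughly what we have in mind for the Zero startup command. The host address and replica count are just placeholders standing in for our existing setup; the only change under discussion is the very large `--rebalance_interval`, chosen so the next rebalance is pushed far enough out that predicate moves effectively never run:

```
# Placeholder Zero invocation; only --rebalance_interval differs from our current config.
# 876000h is roughly 100 years, i.e. "never rebalance" in practice.
dgraph zero --my=zero0:5080 --replicas 3 --rebalance_interval 876000h
```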
In our use case, we typically perform a one-time full data import, and after that the write volume is relatively small. As a result, the distribution of predicates should remain fairly stable. Would this be a recommended approach in such scenarios?
Here are the metrics for the affected node at the time of the issue: the Alpha query rate and the number of Badger compactions.