Badger panics after db reaches 1.1T

Thanks for all this! It looks to me like the levels are numbered 0 to maxLevels-1, with Level 0 consisting of the in-memory SST structures. That’s from a quick scan of some code, and could be wrong.

I agree it would be friendlier if the server rejected writes when there is no more room in the levels but continued to serve reads, and in any case "Base level can't be zero" is not very informative without googling and finding this or a similar forum thread. If you are willing, perhaps you could file a feature request for that on GitHub.

Is your overall DB 1.1TB (before re-compaction), or is each group holding 1.1TB, so 4+TB across 4 groups? I would expect the latter, or at least close to 1TB per group, since balancing predicates among the groups will never be perfect.

We could raise the default maxLevels, but that would allow around 10TB on a single machine, which probably creates a different set of problems. Perhaps changing the levelMultiplier to 11 or 12 would be better, but even that depends on the machines and the use case, so it may be good for people to think carefully once data size per machine gets up around 1TB, and to get an earlier or less severe warning/error.
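For anyone who wants to experiment in the meantime, here is a minimal sketch of turning those two knobs through Badger's options API (assuming badger v3, where the setters are `WithMaxLevels` and `WithLevelSizeMultiplier`; the path and chosen values are illustrative, not a recommendation):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Badger's defaults (MaxLevels=7, LevelSizeMultiplier=10) cap the LSM
	// tree's total capacity around the 1TB mark, which matches the panic
	// discussed above.
	opts := badger.DefaultOptions("/data/badger").
		WithLevelSizeMultiplier(12) // grow each level's budget faster than 10x
	// Alternatively, add a level instead, at the cost of allowing roughly
	// 10x more data on one machine:
	// opts = opts.WithMaxLevels(8)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```

Either change trades the early panic for a much larger ceiling, so it is worth sizing deliberately rather than raising both at once.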