In our setup, there is an offline job that writes to a BadgerDB database (around 200 GB),
and once that job is done, the DB files are downloaded by a service that reads from them.
The write process takes around 2-3 hours.
After a while, I see from the logs that compaction takes longer and longer, and the write rate drops significantly.
Since it's a write-only workload, I assume the default configs are not optimal.
For example,
the cache isn't needed,
and it's fine for the data to be less compacted (or not compacted at all) during the load,
and to just compact it once at the end.
Any advice on how to speed up the writes,
or which configs are worth trying?
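
To give an idea of the direction I was thinking of, here is a rough sketch of the write-side options (assuming badger/v4; the option names may differ between versions, the sizes and the "/path/to/db" path are just placeholders, and the values are guesses, not recommendations):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
	"github.com/dgraph-io/badger/v4/options"
)

func main() {
	// Guessed tuning for a write-only bulk load; values are placeholders.
	opts := badger.DefaultOptions("/path/to/db").
		WithSyncWrites(false).          // offline job, losing the tail on a crash is acceptable
		WithDetectConflicts(false).     // no concurrent readers/writers during the load
		WithBlockCacheSize(0).          // caches only help reads (0 is allowed when compression is off)
		WithIndexCacheSize(0).
		WithCompression(options.None).  // trade disk space for write throughput
		WithNumLevelZeroTables(20).     // let L0 grow larger before compaction kicks in
		WithNumLevelZeroTablesStall(40) // and stall writes later than the default

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Bulk writes go through a WriteBatch.
	wb := db.NewWriteBatch()
	defer wb.Cancel()
	if err := wb.Set([]byte("key"), []byte("value")); err != nil {
		log.Fatal(err)
	}
	if err := wb.Flush(); err != nil {
		log.Fatal(err)
	}

	// Compact everything once at the end instead of fighting compaction mid-load.
	if err := db.Flatten(8); err != nil {
		log.Fatal(err)
	}
}
```

Is something along these lines reasonable, or are there other knobs I should be looking at?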