Are there any guidelines for tuning Badger?

There is some information here: Get started.

I’m curious whether you found that a higher NumGoroutines is better than a lower one for a given set of resources, such as CPU cores.

I’m looking at the various Badger superflags and wondering what effects they have.
From the CLI Command Reference for super flags: “The --badger superflag allows you to set many advanced Badger options”:

  1. dir
  2. valuedir
  3. syncwrites
  4. numversionstokeep
  5. readonly
  6. inmemory
  7. metricsenabled
  8. memtablesize
  9. basetablesize
  10. baselevelsize
  11. levelsizemultiplier
  12. tablesizemultiplier
  13. maxlevels — Allows more than 1.1TB of data.
  14. vlogpercentile
  15. valuethreshold
  16. nummemtables
  17. blocksize
  18. bloomfalsepositive
  19. blockcachesize
  20. indexcachesize
  21. numlevelzerotables
  22. numlevelzerotablesstall
  23. valuelogfilesize
  24. valuelogmaxentries
  25. numcompactors
  26. compactl0onclose
  27. lmaxcompaction
  28. zstdcompressionlevel
  29. verifyvaluechecksum
  30. encryptionkeyrotationduration
  31. bypasslockguard
  32. checksumverificationmode
  33. detectconflicts
  34. namespaceoffset
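A superflag bundles several of these options into a single semicolon-separated flag value. A hypothetical `dgraph alpha` invocation might look like the following; the option names come from the list above, but the specific values are illustrative only, not recommendations:

```shell
# Illustrative only: set compactors, caches, and max levels in one --badger superflag.
dgraph alpha \
  --badger "numcompactors=4; blockcachesize=2147483648; indexcachesize=1073741824; maxlevels=8"
```

Any option not named in the superflag keeps its Badger default.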

References:
CLI Command Reference: Dgraph CLI Reference - Deploy
Badger options: badger package - github.com/dgraph-io/badger/v3 - Go Packages

Maybe this is helpful for understanding transactions and how they can slow things down: FAQ.