Bulk loader OOM

Hello, I have 200 GB of rdf.gz files and am trying to use the bulk loader to build my Dgraph database (latest version from Docker).
I already use --xidmap to persist my xids. Here is my command:

dgraph bulk -f=<...> -s=test.schema --ignore_errors --map_shards=32 --reduce_shards=3 --xidmap=adxid -z=0.0.0.0:5080 --log_dir=logd --badger.compression_level=16

My machine has 128 GB of memory and 4 TB of disk.
The MAP phase worked fine, but the REDUCE phase hit an OOM error:

[05:47:03Z] REDUCE 02h24m35s 25.18% edge_count:748.6M edge_speed:1.771M/sec plist_count:402.2M plist_speed:951.7k/sec. Num Encoding: 0
[05:47:04Z] REDUCE 02h24m36s 25.18% edge_count:748.6M edge_speed:1.767M/sec plist_count:402.2M plist_speed:949.4k/sec. Num Encoding: 0
[05:47:05Z] REDUCE 02h24m37s 25.18% edge_count:748.6M edge_speed:1.763M/sec plist_count:402.2M plist_speed:947.2k/sec. Num Encoding: 0
[05:47:06Z] REDUCE 02h24m38s 25.18% edge_count:748.6M edge_speed:1.759M/sec plist_count:402.2M plist_speed:945.0k/sec. Num Encoding: 0
[05:47:08Z] REDUCE 02h24m40s 25.18% edge_count:748.6M edge_speed:1.750M/sec plist_count:402.2M plist_speed:940.4k/sec. Num Encoding: 0
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x1bbac29, 0x16)
        /usr/local/go/src/runtime/panic.go:1114 +0x72
runtime.sysMap(0xda18000000, 0x298000000, 0x2bf0bf8)
        /usr/local/go/src/runtime/mem_linux.go:169 +0xc5
runtime.(*mheap).sysAlloc(0x2bdb5c0, 0x296400000, 0x2bdb5c8, 0x14b1d0)
        /usr/local/go/src/runtime/malloc.go:715 +0x1cd
runtime.(*mheap).grow(0x2bdb5c0, 0x14b1d0, 0x0)
        /usr/local/go/src/runtime/mheap.go:1286 +0x11c
runtime.(*mheap).allocSpan(0x2bdb5c0, 0x14b1d0, 0xa20000, 0x2bf0c08, 0xfffffffffffffade)

The xidmap file takes 44 GB.
I also noticed that someone has already offered some solutions, e.g. "Bulk loader xidmap memory optimization", but I don't think the current version has any --limitMemory option.

I'm guessing the REDUCE phase reads all the map output files into memory?

So, can anybody help me? How can I avoid the OOM error and successfully create this graph?
It is urgent, please :fearful:

(P.S. I already tried a single machine with a single Zero and Alpha; that didn't work either.)

Hi @jokk33, Have you tried the troubleshooting options?

Thanks for replying. I'm not sure about that.
--badger.vlog=disk and --lru_mb are parameters for dgraph alpha,
but I stopped Alpha and am running dgraph bulk inside the Dgraph Zero Docker container.
How can I pass those parameters to dgraph bulk?
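(For anyone else wondering the same thing: the bulk loader has its own flag set, separate from Alpha's, so Alpha-only flags like --lru_mb won't be accepted. A quick way to see which flags your installed version actually supports:)

```shell
# List all flags supported by the bulk loader in this Dgraph version.
dgraph bulk --help

# Narrow down to memory/badger-related options, if any exist in this build.
dgraph bulk --help 2>&1 | grep -i -e badger -e mb -e shard
```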

This is a known issue; the team is working on analyzing it.
Meanwhile, can you try running the live loader from a machine separate from Alpha?
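A rough sketch of the live loader invocation, assuming a running Zero and Alpha (the host names and ports here are placeholders; adjust them to your setup):

```shell
# Stream the RDF files into a running cluster instead of bulk loading.
# --xidmap reuses the same xid directory so external IDs stay consistent.
dgraph live \
  -f /data/rdf \
  -s test.schema \
  -a alpha-host:9080 \
  -z zero-host:5080 \
  --xidmap adxid
```

The live loader trades speed for a much smaller memory footprint, since it sends mutations to a running Alpha rather than building posting lists in-process.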

Also, can you try increasing the number of shards using the --map_shards and --reduce_shards flags of the bulk loader? This may help if you are willing to run multiple Alpha groups.
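For example, a variant of the original command with more shards (a sketch only; note that --reduce_shards must match the number of Alpha groups you plan to run, so 6 reduce shards implies 6 groups):

```shell
# More, smaller shards can lower the peak memory of each REDUCE worker.
dgraph bulk \
  -f /data/rdf \
  -s test.schema \
  --ignore_errors \
  --map_shards=64 \
  --reduce_shards=6 \
  --xidmap=adxid \
  -z=0.0.0.0:5080
```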

Thanks a lot