How to solve mutation conflicts

Thanks!
For now, the test files above work fine.
For our production files, many data nodes and edges were lost after the bulk load, and the bulk loader usually failed at the REDUCE stage because of OOM (it only succeeded twice), so we are trying the -j 1 option to test again. It's a little slow.
Is there some way to resume a bulk load from the mapped files in the tmp directory?
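For context, the reduced-concurrency run looks roughly like this (a sketch; the file paths are placeholders, and the `-j`/`--tmp` flags are the standard `dgraph bulk` options for goroutine count and the map-phase scratch directory):

```
# Reduced-concurrency bulk load to work around OOM at the REDUCE stage.
# -j limits the number of goroutines; --tmp is where the map-phase
# output lands. Paths below are placeholders.
dgraph bulk -f production.rdf.gz -s schema.txt -j 1 --tmp /data/bulk-tmp
```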

And I'm a little confused. If the RDF file looks like this (the A_1 triples are commented out):

#<_:A_1> <dgraph.type> "A" .
#<_:A_1> <field1> "value1" .
<_:A_2> <dgraph.type> "A" .
<_:A_2> <field1> "value2" .

<_:B_1> <dgraph.type> "B" .
<_:B_1> <field2> "aaaaaaaaaaaa" .
<_:B_1> <hasConnectionTo> <_:A_1> .

will the node A_1 exist after the bulk load?
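One way to check after the load is to query the B nodes and follow the edge (a sketch in DQL; it assumes the predicates above are declared in the schema). If A_1 was created only because it appears as the object of hasConnectionTo, it should show up here with a uid but no dgraph.type or field1 of its own:

```
{
  q(func: type(B)) {
    uid
    field2
    hasConnectionTo {
      uid
      dgraph.type
      field1
    }
  }
}
```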