Not able to see ingested data after bulk loading

I loaded data with the bulk loader using the command below:

dgraph bulk -f /home/mouninka_mandadic/Downloads/Dgraph/sample_data.rdf -s /home/mouninka_mandadic/schema1 --ignore_errors

Connecting to zero at localhost:5080
___ Begin jemalloc statistics ___
Version: "5.2.1-0-gea6b3e973b477b8061e0076bb257dbd7f3faa756"
Build-time option settings
  config.cache_oblivious: true
  config.debug: false
  config.fill: true
  config.lazy_lock: false
  config.malloc_conf: "background_thread:true,metadata_thp:auto"
  config.opt_safety_checks: false
  config.prof: true
  config.prof_libgcc: true
  config.prof_libunwind: false
  config.stats: true
  config.utrace: false
  config.xmalloc: false
Run-time option settings
  opt.abort: false
  opt.abort_conf: false
  opt.confirm_conf: false
  opt.retain: true
  opt.dss: "secondary"
  opt.narenas: 32
  opt.percpu_arena: "disabled"
  opt.oversize_threshold: 8388608
  opt.metadata_thp: "auto"
  opt.background_thread: true (background_thread: true)
  opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
  opt.muzzy_decay_ms: 0 (arenas.muzzy_decay_ms: 0)
  opt.lg_extent_max_active_fit: 6
  opt.junk: "false"
  opt.zero: false
  opt.tcache: true
  opt.lg_tcache_max: 15
  opt.thp: "default"
  opt.prof: false
  opt.prof_prefix: "jeprof"
  opt.prof_active: true (prof.active: false)
  opt.prof_thread_active_init: true (prof.thread_active_init: false)
  opt.lg_prof_sample: 19 (prof.lg_sample: 0)
  opt.prof_accum: false
  opt.lg_prof_interval: -1
  opt.prof_gdump: false
  opt.prof_final: false
  opt.prof_leak: false
  opt.stats_print: false
  opt.stats_print_opts: ""
Profiling settings
  prof.thread_active_init: false
  prof.active: false
  prof.gdump: false
  prof.interval: 0
  prof.lg_sample: 0
Arenas: 33
Quantum size: 16
Page size: 4096
Maximum thread-cached size class: 32768
Number of bin size classes: 36
Number of thread-cache bin size classes: 41
Number of large size classes: 196
Allocated: 58472, active: 90112, metadata: 3479000 (n_thp 0), resident: 3522560, mapped: 8478720, retained: 2007040
Background threads: 2, num_runs: 3, run_interval: 1033513666 ns
--- End jemalloc statistics ---
Processing file (1 out of 1): /home/mouninka_mandadic/Downloads/Dgraph/sample_data.rdf
Shard tmp/map_output/000 -> Reduce tmp/shards/shard_0/000
badger 2021/05/27 16:49:42 INFO: All 0 tables opened in 0s
badger 2021/05/27 16:49:42 INFO: Discard stats nextEmptySlot: 0
badger 2021/05/27 16:49:42 INFO: Set nextTxnTs to 0
badger 2021/05/27 16:49:42 INFO: All 0 tables opened in 0s
badger 2021/05/27 16:49:42 INFO: Discard stats nextEmptySlot: 0
badger 2021/05/27 16:49:42 INFO: Set nextTxnTs to 0
badger 2021/05/27 16:49:42 INFO: DropAll called. Blocking writes...
badger 2021/05/27 16:49:42 INFO: Writes flushed. Stopping compactions now...
badger 2021/05/27 16:49:42 INFO: Deleted 0 SSTables. Now deleting value logs...
badger 2021/05/27 16:49:42 INFO: Value logs deleted. Creating value log file: 1
badger 2021/05/27 16:49:42 INFO: Deleted 1 value log files. DropAll done.
Num Encoders: 2
Final Histogram of buffer sizes: 
 -- Histogram: 
Min value: 50317 
Max value: 50317 
Mean: 50317.00 
Count: 1 
[0 B, 64 KiB) 1 100.00% 
 --

Finishing stream id: 1
Finishing stream id: 2
Finishing stream id: 3
Finishing stream id: 4
Finishing stream id: 5
Finishing stream id: 6
Finishing stream id: 7
Finishing stream id: 8
Finishing stream id: 9
Finishing stream id: 10
Finishing stream id: 11
Finishing stream id: 12
Finishing stream id: 13
Finishing stream id: 14
Finishing stream id: 15
Finishing stream id: 16
Finishing stream id: 17
badger 2021/05/27 16:49:42 INFO: Table created: 8 at level: 6 for stream: 8. Size: 257 B
badger 2021/05/27 16:49:42 INFO: Table created: 1 at level: 6 for stream: 1. Size: 255 B
Finishing stream id: 18
badger 2021/05/27 16:49:42 INFO: Table created: 2 at level: 6 for stream: 2. Size: 347 B
Finishing stream id: 19
Finishing stream id: 20
badger 2021/05/27 16:49:42 INFO: Table created: 13 at level: 6 for stream: 13. Size: 246 B
badger 2021/05/27 16:49:42 INFO: Table created: 6 at level: 6 for stream: 7. Size: 251 B
badger 2021/05/27 16:49:42 INFO: Table created: 4 at level: 6 for stream: 4. Size: 12 kB
badger 2021/05/27 16:49:42 INFO: Table created: 3 at level: 6 for stream: 3. Size: 379 B
badger 2021/05/27 16:49:42 INFO: Table created: 9 at level: 6 for stream: 9. Size: 316 B
badger 2021/05/27 16:49:42 INFO: Table created: 5 at level: 6 for stream: 5. Size: 429 B
Finishing stream id: 21
Finishing stream id: 22
Finishing stream id: 23
Finishing stream id: 24
Finishing stream id: 25
Finishing stream id: 26
badger 2021/05/27 16:49:42 INFO: Table created: 7 at level: 6 for stream: 6. Size: 430 B
badger 2021/05/27 16:49:42 INFO: Table created: 15 at level: 6 for stream: 15. Size: 249 B
badger 2021/05/27 16:49:42 INFO: Table created: 14 at level: 6 for stream: 14. Size: 243 B
badger 2021/05/27 16:49:42 INFO: Table created: 16 at level: 6 for stream: 16. Size: 268 B
badger 2021/05/27 16:49:42 INFO: Table created: 11 at level: 6 for stream: 11. Size: 303 B
Finishing stream id: 27
badger 2021/05/27 16:49:42 INFO: Table created: 12 at level: 6 for stream: 12. Size: 338 B
Finishing stream id: 28
badger 2021/05/27 16:49:42 INFO: Table created: 10 at level: 6 for stream: 10. Size: 345 B
Finishing stream id: 29
Finishing stream id: 30
Finishing stream id: 31
Finishing stream id: 32
Finishing stream id: 33
badger 2021/05/27 16:49:42 INFO: Table created: 18 at level: 6 for stream: 18. Size: 231 B
badger 2021/05/27 16:49:42 INFO: Table created: 17 at level: 6 for stream: 17. Size: 246 B
Finishing stream id: 34
Finishing stream id: 35
badger 2021/05/27 16:49:42 INFO: Table created: 19 at level: 6 for stream: 19. Size: 595 B
badger 2021/05/27 16:49:42 INFO: Table created: 20 at level: 6 for stream: 20. Size: 489 B
badger 2021/05/27 16:49:42 INFO: Table created: 21 at level: 6 for stream: 22. Size: 252 B
Finishing stream id: 36
badger 2021/05/27 16:49:42 INFO: Table created: 25 at level: 6 for stream: 24. Size: 232 B
Writing split lists back to the main DB now
badger 2021/05/27 16:49:42 INFO: copying split keys to main DB Sent data of size 0 B
badger 2021/05/27 16:49:42 INFO: Table created: 23 at level: 6 for stream: 23. Size: 253 B
badger 2021/05/27 16:49:42 INFO: Table created: 24 at level: 6 for stream: 25. Size: 251 B
badger 2021/05/27 16:49:42 INFO: Table created: 22 at level: 6 for stream: 21. Size: 488 B
badger 2021/05/27 16:49:42 INFO: Table created: 26 at level: 6 for stream: 26. Size: 336 B
badger 2021/05/27 16:49:42 INFO: Table created: 29 at level: 6 for stream: 29. Size: 257 B
badger 2021/05/27 16:49:42 INFO: Table created: 31 at level: 6 for stream: 31. Size: 247 B
badger 2021/05/27 16:49:42 INFO: Table created: 30 at level: 6 for stream: 30. Size: 2.2 kB
badger 2021/05/27 16:49:42 INFO: Table created: 27 at level: 6 for stream: 27. Size: 298 B
badger 2021/05/27 16:49:42 INFO: Table created: 28 at level: 6 for stream: 28. Size: 258 B
badger 2021/05/27 16:49:42 INFO: Table created: 32 at level: 6 for stream: 32. Size: 251 B
badger 2021/05/27 16:49:42 INFO: Table created: 33 at level: 6 for stream: 33. Size: 307 B
badger 2021/05/27 16:49:42 INFO: Table created: 34 at level: 6 for stream: 34. Size: 274 B
badger 2021/05/27 16:49:42 INFO: Table created: 35 at level: 6 for stream: 35. Size: 277 B
badger 2021/05/27 16:49:42 INFO: Table created: 36 at level: 6 for stream: 36. Size: 375 B
badger 2021/05/27 16:49:42 INFO: Table created: 37 at level: 6 for stream: 37. Size: 254 B
badger 2021/05/27 16:49:42 INFO: Resuming writes
badger 2021/05/27 16:49:42 INFO: Lifetime L0 stalled for: 0s
badger 2021/05/27 16:49:42 INFO: 
Level 0 [ ]: NumTables: 01. Size: 2.0 KiB of 0 B. Score: 0.00->0.00 Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 5 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 6 [B]: NumTables: 37. Size: 25 KiB of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level Done
badger 2021/05/27 16:49:42 INFO: Lifetime L0 stalled for: 0s
badger 2021/05/27 16:49:42 INFO: 
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 5 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 6 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level Done
[16:49:42+0530] REDUCE 00s 100.00% edge_count:1.259k edge_speed:1.259k/sec plist_count:759.0 plist_speed:759.0/sec. Num Encoding MBs: 0. jemalloc: 0 B 
Total: 00s

I guess the data was ingested properly.
After this I tried to query the data using the Ratel UI, but I see no data.
Did I miss any steps in between?
What should I do to see my ingested data?

Please share the steps you took to spin up the cluster with the data.

You may have done this already and just not mentioned it, but the directory structure created by the bulk loader needs to be moved for the alpha to actually use it.

E.g. ./out/0/p is the output for a single alpha group, but that last-level p directory should sit directly in the working directory of the first group's alpha(s).
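A minimal sketch of that move, assuming the bulk loader ran in the current directory and your alpha's working directory is ./alpha0 (both paths are placeholders for your setup):

```shell
# (For illustration only: recreate the layout the bulk loader would have
# produced; in a real run, `dgraph bulk ...` writes out/0/p itself.)
mkdir -p out/0/p

# Copy group 0's posting lists into the alpha's working directory,
# here assumed to be ./alpha0 -- adjust to wherever your alpha runs.
mkdir -p alpha0
cp -r out/0/p alpha0/p

# Then start the alpha from that directory (with zero still running), e.g.:
#   cd alpha0 && dgraph alpha --zero localhost:5080
ls alpha0/p
```

With the p directory in place, restart the alpha and the bulk-loaded data should be queryable from Ratel.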

https://dgraph.io/docs/deploy/fast-data-loading/bulk-loader/#how-to-properly-bulk-load

Thanks, it worked!