Problems with types and big datasets

I’m using Dgraph v22.0.2 with Docker.
I have more than 5 billion dgraph.type predicates. I don’t think the problem is with the type() function itself; it works fine for queries like this:

{
  q(func: type(SmallerType))  {
    count(uid)
  }
}

where the types have fewer than 1B entries.

I think I’m facing two separate issues:

  1. Counting types with billions of entries, I get the following (a paginated workaround is sketched below):
panic serving 172.23.0.1:47114: Allocator can not allocate more than 64 buffers

See: [BUG]: Allocator can not allocate more than 64 buffers · Issue #8840 · dgraph-io/dgraph · GitHub.

  2. The schema cannot be downloaded or queried, so expand() doesn’t work (a narrowed schema query is sketched after this list):
curl localhost:8080/query -XPOST -H "Content-Type: application/dql" -d 'schema {}'

Results in:

panic serving 172.24.0.1:45378: runtime error: slice bounds out of range [:8] with length 7
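
A narrowed schema query, restricted to specific predicates, might sidestep whatever the full-schema read trips over. This is an untested sketch, and name is just a placeholder predicate:

curl localhost:8080/query -XPOST -H "Content-Type: application/dql" -d 'schema(pred: [name]) { type index }'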

I don’t know if the two problems are related.
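
For the first issue, I wonder if counting in pages would keep each request under the buffer limit. This is only an untested sketch: BigType is a placeholder, after would be set to the last uid returned by the previous page, and the page sizes would be summed client-side:

{
  q(func: type(BigType), first: 1000000, after: 0x0) {
    uid
  }
}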

I suspect that the second problem may be related to how I imported the data. I used a locally compiled Dgraph binary that includes this bulk-loader fix: fix(bulk): removed buffer max size by davideaimar · Pull Request #8841 · dgraph-io/dgraph · GitHub, connected to a local Zero instance running the same binary.

Once the bulk import completed, I moved the p folder into a Docker volume and re-ran everything from there, spawning a new Zero and without reusing the zw folder (roughly as sketched below).
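
The paths, file names, and flags here are illustrative rather than my exact commands:

# Zero and the bulk loader both run the locally compiled binary with the PR #8841 fix
./dgraph zero --my=localhost:5080
./dgraph bulk -f data.rdf.gz -s schema.txt --zero=localhost:5080

# move the generated posting directory into the Docker volume, then start
# a fresh Zero and Alpha against it; the zw folder from the bulk run is not reused
mv out/0/p /path/to/docker/volume/p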

Do you think this could be the reason for the second problem? Is there a way to re-upload the schema without having to rebuild all the indexes?
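
To make the second question concrete: would re-posting the schema through /alter be safe here, or would it kick off a full reindex? A sketch of what I mean, with a placeholder predicate and type:

curl localhost:8080/alter -XPOST -d '
name: string @index(exact) .
type SmallerType {
  name
}
'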