It seems the best practice is to use granular properties/predicates, e.g. `user.name` rather than `name`. And any existing RDF data imported into Dgraph would have IRIs for predicates. Both scenarios can result in long predicate names. Do very long names affect storage requirements? (It feels like a stupid question, but there is at least one popular NoSQL database where they do.)
The hard technical limit is that the length of the predicate name has to fit in a uint16 (2^16, roughly 65k bytes), so it can be stored as part of the key within Badger. Practically speaking, most predicates are only a word or so.
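To make the limit concrete, here is a hedged sketch in Go of what a uint16 length prefix implies; `buildKey` is a hypothetical helper for illustration, not Dgraph's actual Badger key layout:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// buildKey embeds a predicate name in a key behind a uint16 length
// prefix. Any name longer than 65,535 bytes cannot be represented.
// (Illustrative only; Dgraph's real key encoding differs.)
func buildKey(predicate string) ([]byte, error) {
	if len(predicate) > math.MaxUint16 {
		return nil, fmt.Errorf("predicate too long: %d bytes (max %d)",
			len(predicate), math.MaxUint16)
	}
	key := make([]byte, 2+len(predicate))
	binary.BigEndian.PutUint16(key[:2], uint16(len(predicate)))
	copy(key[2:], predicate)
	return key, nil
}

func main() {
	k, err := buildKey("user.name")
	fmt.Println(len(k), err) // 11 <nil>: 2-byte prefix + 9-byte name
}
```

So a short predicate like `user.name` costs only a couple of extra bytes per key, while a long IRI is paid for in every key that references it.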
The length of a predicate name does not affect query processing, but very long names do mean bigger payloads to encode in query responses and send over the network.