Adding a pbToJson function

We have two methods which handle converting the response from Dgraph into the format that the client expects:

  1. ToJSON (HTTP client)
  2. ToProtocolBuffer (Go and other clients)

This means that whenever any new feature (schema validation, types, result count, debug root, etc.) is implemented, both functions have to be modified. What usually happens is that ToJSON is modified because it’s more commonly used, and ToProtocolBuffer lags behind. This leads to inconsistencies. The idea is to write a wrapper over ToProtocolBuffer which can convert the ProtocolBuffer response to JSON.
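Concretely, the wrapper could look something like this (a rough sketch only; the types below are illustrative stand-ins for the generated proto messages, and grouping repeated children into lists is my assumption):

```go
package query

import "encoding/json"

// Illustrative stand-ins for the generated proto types; the real
// messages may differ.
type Property struct {
	Prop string
	Val  string
}

type Node struct {
	Attribute  string
	Properties []*Property
	Children   []*Node
}

// pbToJson walks the proto tree once, building the same
// map[string]interface{} shape that ToJSON produces today, and then
// marshals it with encoding/json.
func pbToJson(n *Node) ([]byte, error) {
	return json.Marshal(nodeToMap(n))
}

func nodeToMap(n *Node) map[string]interface{} {
	m := make(map[string]interface{})
	for _, p := range n.Properties {
		m[p.Prop] = p.Val
	}
	for _, c := range n.Children {
		// Children sharing an attribute are grouped into a JSON list.
		list, _ := m[c.Attribute].([]interface{})
		m[c.Attribute] = append(list, nodeToMap(c))
	}
	return m
}
```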

Add pbToJson function by pawanrawal · Pull Request #251 · dgraph-io/dgraph · GitHub is an initial attempt and produces the same output as ToJSON for the basic test case. Some modifications have to be made to make sure the other tests pass. Before that, I just wanted to get your views on this, @minions.

This is nice, but should we be concerned about latency? If the benchmarks show little difference, then this seems like a clear win to me.

It’d definitely be nice if we had to maintain just one of those and the other automatically got the changes.
But what would be the effect on performance? Some benchmark data on ToJson before and after the change, with different numbers of result entities, would give us a good idea about this.

Right now our ToProtocolBuffer method is about 2x faster than ToJSON. Let me try to complete this and run the benchmarks on some large queries (which return 1000 entities).
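The benchmarks I have in mind look roughly like this (buildSubGraph and actorQuery1000 are placeholder fixtures, and the exact ToJSON signature may differ):

```go
package query

import "testing"

// Illustrative benchmark shape; setup is excluded from the measured loop.
func BenchmarkToJSON_1000_Actor(b *testing.B) {
	sg := buildSubGraph(b, actorQuery1000) // hypothetical: parse + process the query once
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := sg.ToJSON(); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running them with go test -bench=ToJSON -benchmem should give ns/op, B/op, and allocs/op numbers.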

There are both latency and memory implications. We have to convert first from FlatBuffers to protocol buffers, and then from protocol buffers to JSON.


Does the speedup include the marshalling step, or just going through the subgraph and creating the map and graph objects (i.e. the postTraverse and preTraverse method times)?

I was talking about https://open.dgraph.io/post/protobuf/. So it’s the current ToJSON vs ToProtocolBuffer. I still have to benchmark the new one.

Benchmark results

BenchmarkToJSON_10_Actor-4          50000    36925 ns/op     9768 B/op    154 allocs/op
BenchmarkToJSON_10_Director-4       50000    32405 ns/op     9240 B/op    144 allocs/op
BenchmarkToJSON_100_Actor-4         30000    55300 ns/op    14792 B/op    273 allocs/op
BenchmarkToJSON_100_Director-4       5000   222286 ns/op    64146 B/op   1030 allocs/op
BenchmarkToJSON_1000_Actor-4         1000  2336911 ns/op   700945 B/op   9727 allocs/op
BenchmarkToJSON_1000_Director-4       500  2851760 ns/op   787651 B/op  12862 allocs/op
BenchmarkToJSON2_10_Actor-4         50000    34604 ns/op     9490 B/op    163 allocs/op
BenchmarkToJSON2_10_Director-4      50000    28807 ns/op     9233 B/op    156 allocs/op
BenchmarkToJSON2_100_Actor-4        10000   169044 ns/op    47546 B/op    865 allocs/op
BenchmarkToJSON2_100_Director-4      5000   268915 ns/op    77585 B/op   1405 allocs/op
BenchmarkToJSON2_1000_Actor-4        1000  2471560 ns/op   731118 B/op  11623 allocs/op
BenchmarkToJSON2_1000_Director-4      500  3456367 ns/op  1076158 B/op  19471 allocs/op

The memory profile shows that a bit less memory is allocated in the new method (2800 MB for the new method vs 3100 MB for the old one):

  • ToJSON
3076.89MB of 3098.91MB total (99.29%)
Dropped 3 nodes (cum <= 15.49MB)
Showing top 10 nodes out of 40 (cum >= 27.50MB)
      flat  flat%   sum%        cum   cum%
 1526.12MB 49.25% 49.25%  1553.62MB 50.13%  github.com/dgraph-io/dgraph/query.postTraverse
  395.54MB 12.76% 62.01%   395.54MB 12.76%  reflect.mapiterinit
  291.05MB  9.39% 71.40%   929.51MB 29.99%  encoding/json.Marshal
  266.51MB  8.60% 80.00%  1515.26MB 48.90%  encoding/json.(*mapEncoder).encode
  197.65MB  6.38% 86.38%   197.65MB  6.38%  bytes.makeSlice
  146.01MB  4.71% 91.09%   199.74MB  6.45%  encoding/json.compact
      85MB  2.74% 93.84%       85MB  2.74%  reflect.unsafe_New
   83.50MB  2.69% 96.53%   521.04MB 16.81%  reflect.Value.MapKeys
      58MB  1.87% 98.40%       58MB  1.87%  encoding/json.typeEncoder
   27.50MB  0.89% 99.29%    27.50MB  0.89%  github.com/dgraph-io/dgraph/types.(*unmarshalString).FromBinary
  • ToJSON2
2804.04MB of 2872.11MB total (97.63%)
Dropped 19 nodes (cum <= 14.36MB)
Showing top 10 nodes out of 42 (cum >= 48MB)
      flat  flat%   sum%        cum   cum%
 1519.46MB 52.90% 52.90%  1519.46MB 52.90%  github.com/dgraph-io/dgraph/query.pbToJson
  261.52MB  9.11% 62.01%   261.52MB  9.11%  reflect.mapiterinit
  224.02MB  7.80% 69.81%   224.02MB  7.80%  github.com/dgraph-io/dgraph/query.glob..func1
  217.01MB  7.56% 77.37%   882.53MB 30.73%  encoding/json.(*mapEncoder).encode
  182.99MB  6.37% 83.74%   182.99MB  6.37%  bytes.makeSlice
  127.03MB  4.42% 88.16%   434.05MB 15.11%  github.com/dgraph-io/dgraph/query.(*SubGraph).preTraverse
   87.50MB  3.05% 91.21%   392.53MB 13.67%  reflect.Value.MapKeys
      86MB  2.99% 94.20%       86MB  2.99%  reflect.unsafe_New
   50.50MB  1.76% 95.96%    50.50MB  1.76%  github.com/dgraph-io/dgraph/types.(*unmarshalString).FromBinary
      48MB  1.67% 97.63%       48MB  1.67%  encoding/json.typeEncoder

For queries with a larger number of entities (100, 1000), we can see that ToJSON2 (the new method) takes more ns/op and allocates more B/op. So maybe it’s not such a good idea to go with it? It would still be nice to have a second pair of eyes on the code before we discard it, in case I missed some obvious optimizations.


Good job verifying this with benchmarks. I’ll take a look at the code over the weekend maybe, to see if we could optimize this. I had thought about this exact problem before as well, but I kept ToJson because it’s going to be used more often than ToProto in startups and smaller installations, and this way we keep it as fast as we possibly can.

I doubt we’d be able to squeeze out the same performance from the two-step method of FB → ToProto → ToJSON.


It might be quite a bit faster if we generate the JSON ourselves instead of building the intermediate map of interface{} structures and sending it to the standard JSON library. The proto would, in some sense, take the place of this intermediate map[string]interface{}.

That could be tricky to do. Though maybe possible, if one generates strings all the way, and stitches them together somehow. Of course, that would only complicate the code even further.

I thought it was just a matter of maintaining a single bytes.Buffer and converting it to a string at the end? A simple tree traversal might suffice. (I’m happy to give it a try if you all like.)
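Something along these lines (a simplified sketch: the node type is made up, values are all strings, and repeated attributes aren’t grouped into arrays):

```go
package query

import (
	"bytes"
	"strconv"
)

// Made-up tree node for illustration.
type treeNode struct {
	attr     string
	props    map[string]string
	children []*treeNode
}

// writeJSON serializes the tree in a single pre-order pass into one
// bytes.Buffer, with no intermediate map[string]interface{}.
func writeJSON(buf *bytes.Buffer, n *treeNode) {
	buf.WriteByte('{')
	first := true
	for k, v := range n.props {
		if !first {
			buf.WriteByte(',')
		}
		buf.WriteString(strconv.Quote(k))
		buf.WriteByte(':')
		buf.WriteString(strconv.Quote(v))
		first = false
	}
	for _, c := range n.children {
		if !first {
			buf.WriteByte(',')
		}
		buf.WriteString(strconv.Quote(c.attr))
		buf.WriteByte(':')
		writeJSON(buf, c)
		first = false
	}
	buf.WriteByte('}')
}
```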

It might not be a single bytes.Buffer, because you’re essentially creating a tree, going from predicate-based nodes to UID-based nodes. I’d say this has a low yield for us right now, so do the lexer channel thing instead. We can get to this later.

@pawan take a look at GitHub - pquerna/ffjson: faster JSON serialization for Go. You may be able to run it on our protobuf Go file and see if it improves the benchmarks for ProtoBuf → JSON.
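For context, ffjson works by generating marshalers ahead of time instead of using reflection at encode time; the rough workflow would be (file and type names illustrative, not our actual ones):

```go
// Install the generator and run it over the generated proto file:
//
//	go get -u github.com/pquerna/ffjson
//	ffjson graphresponse.pb.go   // emits graphresponse.pb_ffjson.go
//
// The emitted file adds reflection-free methods like
//
//	func (n *Node) MarshalJSON() ([]byte, error)
//
// to the structs in it, which pbToJson could call in place of
// encoding/json.Marshal.
```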

I was wrong to jump to saying “it’s just a single bytes.Buffer”.

But let me take a step back and ask: is there any fundamental difference between the JSON and proto output such that we have to produce them differently? postTraverse looks like a postorder traversal; preTraverse looks like a preorder traversal. Keeping the logic consistent between the two gets painful as it grows more complicated. Imagine all the different combinations of options: there’s offset, count, filters, afterUID, _uid_, and more, and they can interact.

If there is no fundamental difference between the JSON and proto output, a single piece of code could do all the logic, talking to some interface that can produce JSON, proto, or any other output. To be more precise, for JSON output this interface would produce the map[string]interface{}, not the final JSON string. Since there is no extra proto-to-map[string]interface{} conversion, I would expect this to run no slower than before (assuming pre-order works just as fast as post-order). It would lead to code simplification, improved productivity, consistency between the JSON and proto outputs, etc.
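Roughly, something like this (a sketch only; the names are hypothetical, not our current API):

```go
package query

// outputNode is the interface the single traversal would talk to; one
// implementation builds the JSON map, another would build proto nodes.
type outputNode interface {
	New() outputNode
	AddValue(attr string, v string)
	AddChild(attr string, child outputNode)
}

// jsonNode builds the map[string]interface{} that ToJSON marshals today.
type jsonNode struct {
	m map[string]interface{}
}

func (n *jsonNode) New() outputNode {
	return &jsonNode{m: make(map[string]interface{})}
}

func (n *jsonNode) AddValue(attr string, v string) {
	n.m[attr] = v
}

func (n *jsonNode) AddChild(attr string, child outputNode) {
	// Repeated children under the same attribute become a JSON list.
	list, _ := n.m[attr].([]interface{})
	n.m[attr] = append(list, child.(*jsonNode).m)
}
```

A proto implementation would append Property and Node messages instead, and the traversal code would never know the difference.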

All those are handled before the ToJson or ToProto functions are called; that part is handled during query processing.

That depends a lot on the implementation. It could be executed badly, where we end up with more complex and fragile code because one function is trying to do the job of two, or it could fit just right. It’s not clear which one it would be.

Some filtering + windowing logic might spill into postTraverse and preTraverse if we are trying hard to save memory. I will need to ask you after lunch.

Uh oh… okay, I’ll brace myself.

You are right @jchiu that they are just traversals. The difference is in how we build the outputs: the proto output has a schema (there are nodes which have properties), whereas JSON doesn’t have any inherent schema. The output we want for JSON has attributes (e.g. friends, name, etc.) as keys.

Have a look at Protocol buffer vs JSON time graphs - #10 by pawan for the equivalent JSON output for a protocol buffer, which I generated using the jsonpb package - github.com/golang/protobuf/jsonpb - Go Packages.
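For reference, the conversion through jsonpb looks something like this (assuming the jsonpb.Marshaler API; the surrounding function is illustrative):

```go
package query

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/jsonpb"
	"github.com/golang/protobuf/proto"
)

// printAsJSON renders any proto message as indented JSON via jsonpb.
// This is the kind of conversion I used to generate the output linked above.
func printAsJSON(pb proto.Message) {
	m := jsonpb.Marshaler{Indent: "  "}
	s, err := m.MarshalToString(pb)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s)
}
```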

I definitely agree that we should have just one function and generate an intermediate structure. I couldn’t find a better way to do it apart from what I proposed here.

Revisiting this… I’m going to take a stab at it. Thanks @pawan for all the pointers. I will keep the benchmarks in mind.
