Hi @CosmicPangolin1, thanks a lot for putting your thoughts together.
However, I think there is a misunderstanding here: I'm not criticizing Dgraph's approach of using the GraphQL language. What I'm criticizing is the /graphql endpoint, specifically the fact that the auto-generated, GraphQL-compliant server doesn't provide enough flexibility to model my own inputs/outputs and custom operations against it.
The main point is that having a client directly consume the GraphQL API exposed by Dgraph is just not realistically doable (see the points I mention above), except for prototyping or a pet project.
Practically everything you suggest has outlier value and can already be engineered on top of Dgraph just as easily, regardless of Dgraph hosting the minimal schema API. Of course you should be able to build an intermediate graph layer involving more business logic if you so choose.
Yes, that's completely right! Everything can already be engineered on top of Dgraph.
The essence of this post is to understand how best to achieve this intermediate graph layer involving more business logic, and maybe to have this approach embedded into Dgraph itself.
However, I can guarantee that you always want more business logic, especially for validation/sanitisation, storing derived fields, etc.
If I go with plain GraphQL + Dgraph, I would lose a lot of the automatic query plans I get with the current implementation.
I don't disagree that the GraphQL schema and the Dgraph schema are almost certainly a 1:1 mapping of types, and I'm OK with that. The main point is to provide a way of customising the CRUD operations and adding business logic around them.
On a side note, when the mappings differ, you are provided with a @dgraph() directive to change that mapping.
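For instance, a mapping along these lines is possible (a rough sketch based on the documented @dgraph directive; the Human/Person type and predicate names are made up):

```graphql
# Sketch: expose an existing Dgraph type "Human" (with predicates "Human.name"
# and "Human.friends") under a differently named GraphQL type "Person".
type Person @dgraph(type: "Human") {
  name: String! @dgraph(pred: "Human.name")
  friends: [Person] @dgraph(pred: "Human.friends")
}
```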
Further, the things you advocate for separating have incremental value when built on top of GraphQL semantics. If you’re claiming that you can give me the same @auth flexibility without defining a duplicate semantic form that maps precisely to the schema…I just don’t believe you. Having the annotation in-schema gives me the confidence to deploy a single secure context that can be accessed by different apps with vastly different auth needs.
Having an @auth directive in the schema isn't wrong per se; what is wrong is that it is too opinionated.
One could argue that a @validate directive is also doable. Some people prefer doing it in code, others prefer making it more declarative.
Imagine I want to use Auth0's way of authenticating and authorizing, or my custom OAuth2 (OpenID Connect) server deployed in my cluster?
Flexibility comes when developers are able to choose what's best for their own circumstances. I should be able to implement my own @auth directive the way I want it to work, for instance.
With the current implementation I'm forced to use the @auth directive if I want auth. It probably still boils down to the fact that we need a nice way to build a server around the Dgraph GraphQL endpoint, or to make that endpoint smarter/more flexible.
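To make that concrete, this is roughly the shape the built-in @auth directive forces on you (a sketch only; the exact # Dgraph.Authorization format and rule syntax vary by Dgraph version, and the Todo type and claim names are invented):

```graphql
type Todo @auth(
  # The rule can only be expressed as a GraphQL filter query over JWT claims.
  query: { rule: """
    query($USER: String!) {
      queryTodo(filter: { owner: { eq: $USER } }) { __typename }
    }
  """ }
) {
  id: ID!
  text: String!
  owner: String! @search(by: [hash])
}

# JWT verification is also configured inside the schema itself.
# Dgraph.Authorization {"VerificationKey":"<public-key>","Header":"X-Auth-Token","Namespace":"https://dgraph.io/jwt/claims","Algo":"RS256"}
```

The verification key, header, and claim namespace all live in the schema and assume Dgraph's JWT mechanism, and the rules can only filter on those claims; authorization logic evaluated in my own code isn't expressible here.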
So I feel like this is separation-of-concerns gone wrong. I think a lot of the practical argument depends on imagining failure cases that Dgraph already solves for. And as a result, advocating for a migration of logic to the shoulders of the developer makes ‘engineering’ sense by following some arbitrary heuristics…but in this case I don’t think it leads to more elegant or more powerful systems. Dgraph is about user experience and maximized leverage, just as it should be.
After giving it some thought, I agree with you here. The way I initially thought it worked was that, if I changed a field in my Dgraph schema, my GraphQL schema would also change. But apparently I can map an existing Dgraph schema to GraphQL, and only the GraphQL side could mutate my schema if I'm not careful enough. It also provides the @custom directive for the cases where I don't want the schema to be mutated (see the sketch below).
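For reference, a @custom field resolved outside Dgraph might look roughly like this (a sketch; the URL and type names are placeholders, and the exact set of arguments depends on the Dgraph version):

```graphql
type Query {
  # Resolved by an external HTTP service instead of the auto-generated resolver,
  # so nothing about this query touches the Dgraph schema itself.
  todosByOwner(owner: String!): [Todo] @custom(http: {
    url: "http://localhost:4000/todos?owner=$owner",
    method: "GET"
  })
}
```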
So the separation of concerns in this specific case isn't strictly necessary; it just adds complexity for little gain, as you mentioned.