Since I quite often come across deep mutations in my applications, I recently tried to find a (more or less general) solution for them. The simplest approach, although not a general one, is to use webhooks.
Assume the schema
interface Asset {
  id: ID!
  name: String!
  assetType: AssetType!
}

union AssetType = Internal | External

type Internal {
  id: ID!
  src: String!
}

type External {
  id: ID!
  url: String!
}

type CourseAsset implements Asset @lambdaOnMutate(add: false, update: true, delete: false) {
  id: ID!
  courseRef: Course!
  # from Asset
  # name: String!
  # assetType: AssetType!
}
Then we can add a webhook (minified example):
const updateAssetWebhook = async ({ event, graphql, dql, authHeader }) => {
  // get the patch for the referenced asset type from the event object
  const assetType = Object.keys(event.update.setPatch.assetType)[0]; // e.g. "internalRef"
  const { id, ...set } = event.update.setPatch.assetType[assetType];
  const typeName = assetType[0].toUpperCase() + assetType.slice(1, -3); // "internalRef" -> "Internal"
  // run the mutation on the referenced type
  // (assumes generated patch type names like "InternalPatch")
  await graphql(`
    mutation Asset($id: ID!, $set: ${typeName}Patch!) {
      update${typeName}Mutation(
        input: {
          filter: { id: $id }
          set: $set
        }
      ) {
        numUids
      }
    }
  `, { id, set });
};
self.addWebHookResolvers({
  "CourseAsset.update": updateAssetWebhook,
});
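For clarity, here is the string manipulation that derives the mutation's type name from the patch key, pulled out as a small standalone sketch (assuming the naming convention "<typeName>Ref" for union reference fields):

```javascript
// Derive the GraphQL type name from a setPatch key such as "internalRef".
// Assumes the convention "<typeName>Ref" -> "<TypeName>".
function typeNameFromPatchKey(key) {
  // drop the trailing "Ref" (three characters) and capitalize the first letter
  return key[0].toUpperCase() + key.slice(1, -3);
}

console.log(typeNameFromPatchKey("internalRef")); // "Internal"
console.log(typeNameFromPatchKey("externalRef")); // "External"
```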
In my frontend, I'd run the following mutation:
mutation UpdateCA {
  updateCourseAssetMutation(
    input: {
      filter: { id: "0x1" },
      set: {
        name: "Updated Asset",
        assetType: {
          internalRef: {
            id: "0x2",
            src: "path-to-new-src"
          }
        }
      }
    }
  ) {
    courseAsset {
      name
      assetType {
        ... on Internal {
          src
        }
      }
    }
  }
}
This works, and my deep mutation updates the correct node without creating a new one. However, since webhooks run after the original request, the return value for Internal (in this case) is still the old one.
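As a workaround for the stale return value, the client can refetch the node in a follow-up request once the webhook has presumably run, e.g. via the generated get query (a sketch; I'm assuming a getCourseAsset query exists and that the webhook has finished by the time it runs):

```graphql
query RefetchCA {
  getCourseAsset(id: "0x1") {
    name
    assetType {
      ... on Internal {
        src
      }
    }
  }
}
```

This adds a round trip and is still racy, which is exactly why a pre-commit hook would be preferable.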
Hence my question: what is the intended use of webhooks? Since we have access to all that data and are allowed to run graphql
and/or dql
mutations, we could definitely alter data from the original mutation. Would it not be wiser to commit the original mutation only after the webhook has executed successfully? Also, errors cannot be handled, since the frontend will never know about them.