Thanks guys for your insight.
I had similar thoughts, and now I see that I may have jumped the gun and should have put more thought into this.
I wanted to build on existing workflows and provide more flexibility, but with the current setup that's not easy.
The problem I see with the proposal is that while it's fine for Dgraph to be shipped inside a Docker image (like the official dgraph/dgraph), Dgraph itself should not embed any knowledge of Docker, as that creates a circular dependency. It should be perfectly legal to run Dgraph with Lambda without any Docker infrastructure at all.
The way I implemented it, the dependency is not really circular: the Docker runtime within Alpha would be completely independent of the outside world. Still, I came to the conclusion that deploying lambdas directly on Alpha is a bit odd, and that managing containers is the job of an external orchestration unit, not of the Alpha server.
Why do I think it's odd? It leads to Alpha and Lambda fighting for resources: if you run heavy tasks on Lambda, you have to scale the whole Alpha just to give Lambda more resources.
Personally, I run Dgraph in Kubernetes, and an external, separately scalable and manageable service handling lambda execution seems idiomatic there, whereas Dgraph exec'ing Node.js N times does not. (Somehow Wasmer does feel better, though.)
I think this sums it up.
So in the end there are only two choices:
- Run lambda scripts directly on Alpha and keep tasks lightweight, ideally with WASM.
- Run an external lambda server and deploy it as close to the Alphas as possible using external orchestration like Kubernetes.
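For the second option, here is a minimal sketch of what independent scaling could look like on Kubernetes. The deployment name is hypothetical, and I'm assuming the dgraph/dgraph-lambda image as the lambda server; adjust both to your cluster:

```shell
# Deploy the lambda server as its own workload, next to the Alpha pods.
# (Deployment name is hypothetical; image assumed to be dgraph/dgraph-lambda.)
kubectl create deployment dgraph-lambda --image=dgraph/dgraph-lambda

# Scale the lambda server independently of Alpha when lambda tasks get heavy.
kubectl scale deployment dgraph-lambda --replicas=3
```

The point is exactly the one above: the lambda workload gets its own replica count and resource limits, so heavy lambda tasks never force you to scale Alpha itself.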
Ideally we could do both and decide which lambda servers take care of which tasks.
So I've already started working on a WASM integration and will continue with it.
I want to thank you all again for your thoughts; they really help me a lot! I learn a lot through these discussions, and they make it easier to see the actual needs.