Description
So let's say we implement #23
The question is how? I'd like to propose we consider implementing it as a shim/sidecar server running alongside the rollup and the sugondat-node, instead of implementing it as part of the sugondat-node RPC.
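To make the proposal a bit more concrete, here is a rough sketch of what the shim's RPC surface could look like, assuming JSON-RPC served with jsonrpsee. The namespace, method names and the Blob type are all hypothetical, not a settled design.

```rust
// Hypothetical RPC surface for the proposed sugondat-shim, sketched with
// jsonrpsee's proc macro. Names and types are illustrative only.
use jsonrpsee::{core::RpcResult, proc_macros::rpc};
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
pub struct Blob {
    pub namespace: u32,
    pub data: Vec<u8>,
}

#[rpc(server, client, namespace = "sugondat")]
pub trait ShimApi {
    /// Returns the blobs of the given namespace included in the given block.
    #[method(name = "getBlobs")]
    async fn get_blobs(&self, block_hash: [u8; 32], namespace: u32) -> RpcResult<Vec<Blob>>;

    /// Submits a blob; only available when the shim was started with a signing key.
    #[method(name = "submitBlob")]
    async fn submit_blob(&self, blob: Blob) -> RpcResult<[u8; 32]>;
}
```

The adapters would only ever talk to this narrow surface, regardless of what sits behind the shim (a local node, a remote endpoint, or a simulator).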
There are several facets; let's go through each of them.
UX/DX
Yes, that would mean that the user will need to run another thing. IMO it's fine. I agree it would be better to run a single binary, but we are already past this: in the minimal deployment the rollup user would run the rollup node and the sugondat node as well. This already warrants using something like docker-compose. More realistically, a typical deployment will also include some monitoring tools. Adding another app here doesn't feel like much of a difference.
In a development environment, the dev would need to run the rollup node, sugondat-node and perhaps one or more polkadot validators, which already kinda assumes there would be some orchestration tooling (zombienet or docker-compose).
However, having a shim may actually improve the experience. Imagine that we could provide sugondat-shim simulate --data /tmp, where instead of depending on a full-fledged sugondat environment, the shim would simulate a DA layer. I expect this would improve the DX significantly.
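As a rough illustration of what "simulate a DA layer" could mean, here is a minimal sketch of an in-process mock behind the same shim API; the names, the block/blob layout and the use of the --data directory are assumptions, not a design.

```rust
// Hypothetical core of `sugondat-shim simulate`: instead of talking to a real
// sugondat-node, the shim keeps blobs in memory and fakes block production.
use std::{collections::BTreeMap, path::PathBuf};

/// One "block" of the simulated DA layer: blobs grouped by namespace.
#[derive(Default)]
struct SimulatedBlock {
    blobs: BTreeMap<u32, Vec<Vec<u8>>>,
}

struct SimulatedDa {
    /// Directory passed via `--data`; persistence is left out of this sketch.
    data_dir: PathBuf,
    /// Blocks indexed by their number.
    blocks: Vec<SimulatedBlock>,
}

impl SimulatedDa {
    fn new(data_dir: PathBuf) -> Self {
        Self { data_dir, blocks: vec![SimulatedBlock::default()] }
    }

    /// "Submit" a blob: append it to the block currently being built.
    fn submit_blob(&mut self, namespace: u32, data: Vec<u8>) {
        self.blocks
            .last_mut()
            .expect("there is always a block under construction")
            .blobs
            .entry(namespace)
            .or_default()
            .push(data);
    }

    /// Seal the current block and start a new one, mimicking block production.
    fn produce_block(&mut self) -> u64 {
        self.blocks.push(SimulatedBlock::default());
        (self.blocks.len() - 1) as u64
    }
}
```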
Key management
There is a problem right now (#22): we completely ignore the transaction signing aspect of blob submission in the adapters. As I alluded to in #23, I think it's worthwhile to shift the complexity into the common API layer and make the adapters dumb. This would fit perfectly into the shim's set of responsibilities.
E.g. when running in non-simulation mode, the user would be able to pass sugondat-shim --submit-private-key=/var/secrets/my_key (or sugondat-shim --submit-dev-alice to preserve the existing behavior), and that would enable the blob submission endpoint.
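For illustration, a minimal clap sketch of what that CLI surface could look like, combining the simulate subcommand with the key flags from above; all names mirror the examples in this issue but nothing here is final.

```rust
// Sketch of the shim's command-line surface using clap's derive API.
use std::path::PathBuf;

use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "sugondat-shim")]
struct Cli {
    /// Path to the key used for signing blob submissions; the submission
    /// endpoint is only enabled when a key is available.
    #[arg(long)]
    submit_private_key: Option<PathBuf>,

    /// Sign submissions with the well-known dev key (Alice), preserving the
    /// existing behavior of the adapters.
    #[arg(long)]
    submit_dev_alice: bool,

    #[command(subcommand)]
    command: Option<Command>,
}

#[derive(Subcommand)]
enum Command {
    /// Run against a simulated DA layer instead of a real sugondat-node.
    Simulate {
        /// Directory where the simulated chain state is kept.
        #[arg(long)]
        data: PathBuf,
    },
}

fn main() {
    let cli = Cli::parse();
    match cli.command {
        Some(Command::Simulate { data }) => {
            // Serve the shim API against the simulated DA layer in `data`.
            let _ = data;
        }
        None => {
            // Connect to a real sugondat-node; enable the submission endpoint
            // only if one of the key flags was given.
            let _ = (cli.submit_private_key, cli.submit_dev_alice);
        }
    }
}
```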
Flexibility
We would decouple running sugondat-node from the rollup client. That:
- enables users to pick whether to run the node locally or point to some, potentially public, remote endpoint.
- allows us to embed the light node later on.
- gives us a point of integration on the users' machines. We could add some caching, or maybe promote the node to do more, e.g. portal-network-like functionality but for blobs.
Shortcomings of Substrate RPC
Funny thing, but this approach doesn't address the issue we discussed in Slack: we still have to request full blocks. This is fine initially: it works for the local use-case; for the remote use-case it's worse, but in the future sugondat-node should expose more efficient APIs.
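Concretely, "requesting the full blocks" today means going through the stock Substrate RPC, roughly like the following minimal jsonrpsee sketch (the URL is an assumed local node endpoint, and the decoding of blob extrinsics is elided):

```rust
// Minimal sketch: fetch the latest block via the standard Substrate
// `chain_getBlock` RPC. The whole block, extrinsics included, comes back,
// and the blobs have to be dug out client-side.
use jsonrpsee::{core::client::ClientT, http_client::HttpClientBuilder, rpc_params};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Assumed local sugondat-node RPC endpoint.
    let client = HttpClientBuilder::default().build("http://localhost:9944")?;
    let block: serde_json::Value = client.request("chain_getBlock", rpc_params![]).await?;
    // A real shim would decode the extrinsics and filter out blob submissions;
    // here we just print the raw response.
    println!("{block}");
    Ok(())
}
```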
Embedding
In case embedding is badly needed, it will be possible to arrange that anyway at a relatively low cost through an hourglass pattern. This is how we could achieve it: we link the shim into the adapter directly. The shim publishes a very slim FFI API that configures and sets up a server, plus a few FFI functions to send a message to the server and receive a result, very much like an in-process HTTP server (although the API would be more complex if websockets/JSON-RPC are used for the shim transport).
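A sketch of what that slim FFI waist could look like; the function names, the config encoding and the message format are all hypothetical.

```rust
// Hypothetical "hourglass" FFI surface: a handful of C-compatible functions
// that start an in-process shim and exchange opaque messages with it.
use std::os::raw::{c_char, c_int};

/// Opaque handle to the in-process shim server.
pub struct ShimHandle {
    /* runtime, server state, etc. */
}

/// Configures and starts the shim in-process. Returns a handle, or null on error.
#[no_mangle]
pub extern "C" fn sugondat_shim_start(config_json: *const c_char) -> *mut ShimHandle {
    // Parse `config_json`, spin up the server, box the handle (stubbed here).
    let _ = config_json;
    Box::into_raw(Box::new(ShimHandle {}))
}

/// Sends a single request (e.g. a JSON-RPC payload) and writes the response
/// into `resp_buf`. Returns the response length, or a negative error code.
#[no_mangle]
pub extern "C" fn sugondat_shim_request(
    handle: *mut ShimHandle,
    req: *const u8,
    req_len: usize,
    resp_buf: *mut u8,
    resp_cap: usize,
) -> c_int {
    let _ = (handle, req, req_len, resp_buf, resp_cap);
    0 // stubbed out in this sketch
}

/// Shuts the shim down and frees the handle.
#[no_mangle]
pub extern "C" fn sugondat_shim_stop(handle: *mut ShimHandle) {
    if !handle.is_null() {
        unsafe { drop(Box::from_raw(handle)) };
    }
}
```

The point of the hourglass shape is that only this tiny surface has to stay stable across languages; everything above (the adapter) and below (the shim internals) can evolve freely.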