This repository was archived by the owner on Nov 15, 2023. It is now read-only.
Merged
4 changes: 2 additions & 2 deletions node/core/candidate-validation/src/lib.rs
Original file line number Diff line number Diff line change
@@ -501,7 +501,7 @@ fn validate_candidate_exhaustive<B: ValidationBackend, S: SpawnNamed + 'static>(
mod tests {
use super::*;
use polkadot_node_subsystem_test_helpers as test_helpers;
use polkadot_primitives::v1::{HeadData, BlockData};
use polkadot_primitives::v1::{HeadData, BlockData, UpwardMessage};
use sp_core::testing::TaskExecutor;
use futures::executor;
use assert_matches::assert_matches;
@@ -847,7 +847,7 @@ mod tests {

assert_matches!(v, ValidationResult::Valid(outputs, used_validation_data) => {
assert_eq!(outputs.head_data, HeadData(vec![1, 1, 1]));
assert_eq!(outputs.upward_messages, Vec::new());
assert_eq!(outputs.upward_messages, Vec::<UpwardMessage>::new());
assert_eq!(outputs.new_validation_code, Some(vec![2, 2, 2].into()));
assert_eq!(used_validation_data, validation_data);
});
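The test change above can be reproduced in isolation. Once `UpwardMessage` becomes a plain type alias for `Vec<u8>` (as this PR does), the expected value in the assertion benefits from an explicit element type, which the turbofish `Vec::<UpwardMessage>::new()` provides. A minimal sketch (the alias is copied from the PR; the surrounding test harness is not):

```rust
// `UpwardMessage` after this PR: a raw byte blob.
type UpwardMessage = Vec<u8>;

fn main() {
    // Stand-in for `outputs.upward_messages` in the real test.
    let upward_messages: Vec<UpwardMessage> = Vec::new();
    // The turbofish names the element type explicitly, so the comparison
    // leaves no room for inference ambiguity in `assert_eq!`.
    assert_eq!(upward_messages, Vec::<UpwardMessage>::new());
    println!("ok");
}
```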
38 changes: 1 addition & 37 deletions parachain/src/primitives.rs
@@ -186,44 +186,8 @@ impl<T: Encode + Decode + Default> AccountIdConversion<T> for Id {
}
}

/// Which origin a parachain's message to the relay chain should be dispatched from.
#[derive(Clone, PartialEq, Eq, Encode, Decode)]
#[cfg_attr(feature = "std", derive(Debug, Hash))]
#[repr(u8)]
pub enum ParachainDispatchOrigin {
/// As a simple `Origin::Signed`, using `ParaId::account_id` as its value. This is good when
/// interacting with standard modules such as `balances`.
Signed,
/// As the special `Origin::Parachain(ParaId)`. This is good when interacting with parachain-
/// aware modules which need to succinctly verify that the origin is a parachain.
Parachain,
/// As the simple, superuser `Origin::Root`. This can only be done on specially permissioned
/// parachains.
Root,
}

impl sp_std::convert::TryFrom<u8> for ParachainDispatchOrigin {
type Error = ();
fn try_from(x: u8) -> core::result::Result<ParachainDispatchOrigin, ()> {
const SIGNED: u8 = ParachainDispatchOrigin::Signed as u8;
const PARACHAIN: u8 = ParachainDispatchOrigin::Parachain as u8;
Ok(match x {
SIGNED => ParachainDispatchOrigin::Signed,
PARACHAIN => ParachainDispatchOrigin::Parachain,
_ => return Err(()),
})
}
}

/// A message from a parachain to its Relay Chain.
#[derive(Clone, PartialEq, Eq, Encode, Decode)]
#[cfg_attr(feature = "std", derive(Debug, Hash))]
pub struct UpwardMessage {
/// The origin for the message to be sent from.
pub origin: ParachainDispatchOrigin,
/// The message data.
pub data: Vec<u8>,
}
pub type UpwardMessage = Vec<u8>;

/// Validation parameters for evaluating the parachain validity function.
// TODO: balance downloads (https://github.com/paritytech/polkadot/issues/220)
2 changes: 1 addition & 1 deletion primitives/src/v0.rs
@@ -41,7 +41,7 @@ pub use polkadot_core_primitives::*;
pub use parity_scale_codec::Compact;

pub use polkadot_parachain::primitives::{
Id, ParachainDispatchOrigin, LOWEST_USER_ID, UpwardMessage, HeadData, BlockData,
Id, LOWEST_USER_ID, UpwardMessage, HeadData, BlockData,
ValidationCode,
};

3 changes: 1 addition & 2 deletions primitives/src/v1.rs
@@ -36,8 +36,7 @@ pub use polkadot_core_primitives::v1::{

// Export some polkadot-parachain primitives
pub use polkadot_parachain::primitives::{
Id, ParachainDispatchOrigin, LOWEST_USER_ID, UpwardMessage, HeadData, BlockData,
ValidationCode,
Id, LOWEST_USER_ID, UpwardMessage, HeadData, BlockData, ValidationCode,
};

// Export some basic parachain primitives from v0.
26 changes: 17 additions & 9 deletions roadmap/implementers-guide/src/messaging.md
@@ -26,20 +26,28 @@ The downward message queue doesn't have a cap on its size and it is up to the re
that prevent spamming in place.

Upward Message Passing (UMP) is a mechanism responsible for delivering messages in the opposite direction:
from a parachain up to the relay chain. Upward messages can serve different purposes and can be of different
kinds.
from a parachain up to the relay chain. Upward messages are essentially byte blobs. However, they are interpreted
by the relay-chain according to the XCM standard.

One kind of message is `Dispatchable`. They could be thought of similarly to extrinsics sent to a relay chain: they also
invoke exposed runtime entrypoints, they consume weight and require fees. The difference is that they originate from
a parachain. Each parachain has a queue of dispatchables to be executed. There can be only so many dispatchables at a time.
The XCM standard is a common vocabulary of messages. The XCM standard doesn't require a particular interpretation of
a message. However, the parachains host (e.g. Polkadot) guarantees certain semantics for those.

Moreover, while most XCM messages are handled by the on-chain XCM interpreter, some messages are special-cased.
Specifically, those messages can be checked during the acceptance criteria, and thus invalid
messages would lead to rejecting the candidate itself.

One kind of such a message is `Xcm::Transact`. This upward message can be seen as a way for a parachain
to execute arbitrary entrypoints on the relay-chain. `Xcm::Transact` messages resemble regular extrinsics with the exception that they
originate from a parachain.

The payload of `Xcm::Transact` messages is referred to as `Dispatchable`. When a candidate with such a message is enacted,
the dispatchables are put into a queue corresponding to the parachain. There can be only so many dispatchables in that queue at once.
The weight that processing of the dispatchables can consume is limited by a preconfigured value. Therefore, it is possible
that some dispatchables will be left for later blocks. To make the dispatching more fair, the queues are processed turn-by-turn
in a round robin fashion.
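The weight-limited, round-robin dispatch described above can be sketched as follows. This is a simplified illustration, not the runtime's implementation: para IDs and message weights are plain integers, and `process_round_robin` is a hypothetical name.

```rust
use std::collections::VecDeque;

/// Drain per-para queues one message at a time, cycling between paras, until a
/// per-block weight budget is exhausted. Returns the (para, weight) pairs executed.
fn process_round_robin(
    queues: &mut Vec<(u32, VecDeque<u64>)>, // (para id, weights of queued dispatchables)
    budget: u64,
) -> Vec<(u32, u64)> {
    let mut spent = 0u64;
    let mut executed = Vec::new();
    let mut idx = 0;
    while queues.iter().any(|(_, q)| !q.is_empty()) {
        let len = queues.len();
        let (para, q) = &mut queues[idx % len];
        if let Some(w) = q.pop_front() {
            spent += w;
            executed.push((*para, w));
            if spent >= budget {
                break; // the rest is left for later blocks
            }
        }
        idx += 1;
    }
    executed
}

fn main() {
    let mut queues = vec![
        (1, VecDeque::from(vec![10, 10])),
        (2, VecDeque::from(vec![10])),
    ];
    let executed = process_round_robin(&mut queues, 25);
    // Fairness: para 1 and para 2 alternate rather than para 1 draining first.
    assert_eq!(executed, vec![(1, 10), (2, 10), (1, 10)]);
    println!("{:?}", executed);
}
```

Note how the budget is checked after each message: one over-budget message may still execute, which is why the guide speaks of a *preferred* step weight rather than a hard cap.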

Upward messages are also used by a parachain to request opening and closing HRMP channels (HRMP will be described below).

Other kinds of upward messages can be introduced in the future as well. Potential candidates are
new validation code signalling, or other requests to the relay chain.
The second category of special-cased XCM messages is for horizontal messaging channel management,
namely messages meant to request opening and closing HRMP channels (HRMP will be described below).

## Horizontal Message Passing

@@ -22,5 +22,5 @@ Included: Option<()>,
1. Invoke `Scheduler::schedule(freed)`
1. Invoke the `Inclusion::process_candidates` routine with the parameters `(backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
1. Call the `Router::process_pending_upward_dispatchables` routine to execute all messages in upward dispatch queues.
1. Call the `Router::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
1. If all of the above succeeds, set `Included` to `Some(())`.
144 changes: 80 additions & 64 deletions roadmap/implementers-guide/src/runtime/router.md
@@ -15,20 +15,36 @@ OutgoingParas: Vec<ParaId>;
### Upward Message Passing (UMP)

```rust
/// Dispatchable objects ready to be dispatched onto the relay chain. The messages are processed in FIFO order.
RelayDispatchQueues: map ParaId => Vec<(ParachainDispatchOrigin, RawDispatchable)>;
/// The messages waiting to be handled by the relay-chain originating from a certain parachain.
///
/// Note that some upward messages might have been already processed by the inclusion logic. E.g.
/// channel management messages.
///
/// The messages are processed in FIFO order.
RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
///
/// First item in the tuple is the count of messages and second
/// is the total length (in bytes) of the message payloads.
///
/// Note that this is an auxiliary mapping: it's possible to tell the byte size and the number of
/// messages only looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of
/// loading the whole message queue if only the total size and count are required.
RelayDispatchQueueSize: map ParaId => (u32, u32);
///
/// Invariant:
/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`.
RelayDispatchQueueSize: map ParaId => (u32, u32); // (num_messages, total_bytes)
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
///
/// Invariant:
/// - The set of items from this vector should be exactly the set of the keys in
/// `RelayDispatchQueues` and `RelayDispatchQueueSize`.
NeedsDispatch: Vec<ParaId>;
/// This is the para that will get dispatched first during the next upward dispatchable queue
/// This is the para that gets dispatched first during the next upward dispatchable queue
/// execution round.
///
/// Invariant:
/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`.
NextDispatchRoundStartWith: Option<ParaId>;
```
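The invariants spelled out above tie the three UMP storage items together: every enqueue must update `RelayDispatchQueues`, `RelayDispatchQueueSize`, and `NeedsDispatch` in lockstep. A hedged sketch with simplified types (plain maps and a sorted `Vec` stand in for runtime storage; the struct and method names are illustrative):

```rust
use std::collections::BTreeMap;

#[derive(Default)]
struct Ump {
    relay_dispatch_queues: BTreeMap<u32, Vec<Vec<u8>>>,
    relay_dispatch_queue_size: BTreeMap<u32, (u32, u32)>, // (count, total bytes)
    needs_dispatch: Vec<u32>,                             // kept sorted
}

impl Ump {
    /// Enqueue one upward message for `para`, keeping all three items consistent.
    fn enqueue(&mut self, para: u32, msg: Vec<u8>) {
        let size = self.relay_dispatch_queue_size.entry(para).or_insert((0, 0));
        size.0 += 1;
        size.1 += msg.len() as u32;
        self.relay_dispatch_queues.entry(para).or_default().push(msg);
        // Maintain the invariant: `para` appears in `NeedsDispatch` exactly once.
        if let Err(i) = self.needs_dispatch.binary_search(&para) {
            self.needs_dispatch.insert(i, para);
        }
    }
}

fn main() {
    let mut ump = Ump::default();
    ump.enqueue(7, vec![1, 2, 3]);
    ump.enqueue(7, vec![4]);
    // The cached size is accurate and the key sets match.
    assert_eq!(ump.relay_dispatch_queue_size[&7], (2, 4));
    assert_eq!(ump.needs_dispatch, vec![7]);
    println!("ok");
}
```

Keeping the size cache separate is a storage-access optimization: acceptance checks need only the `(count, bytes)` pair, not the full queue.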

@@ -156,36 +172,9 @@ No initialization routine runs for this module.
Candidate Acceptance Function:

* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
1. Checks each upward message `M` individually depending on its kind:
1. If the message kind is `Dispatchable`:
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the message (NOTE that should include all processed
upward messages of the `Dispatchable` kind up to this point!)
1. If the message kind is `HrmpInitOpenChannel(recipient, max_places, max_message_size)`:
1. Check that the `P` is not `recipient`.
1. Check that `max_places` is less or equal to `config.hrmp_channel_max_places`.
1. Check that `max_message_size` is less or equal to `config.hrmp_channel_max_message_size`.
1. Check that `recipient` is a valid para.
1. Check that there is no existing channel for `(P, recipient)` in `HrmpChannels`.
1. Check that there is no existing open channel request (`P`, `recipient`) in `HrmpOpenChannelRequests`.
1. Check that the sum of the number of already opened HRMP channels by the `P` (the size
of the set found `HrmpEgressChannelsIndex` for `P`) and the number of open requests by the
`P` (the value from `HrmpOpenChannelRequestCount` for `P`) doesn't exceed the limit of
channels (`config.hrmp_max_parachain_outbound_channels` or `config.hrmp_max_parathread_outbound_channels`) minus 1.
1. Check that `P`'s balance is more or equal to `config.hrmp_sender_deposit`
1. If the message kind is `HrmpAcceptOpenChannel(sender)`:
1. Check that there is an existing request between (`sender`, `P`) in `HrmpOpenChannelRequests`
1. Check that it is not confirmed.
1. Check that `P`'s balance is more or equal to `config.hrmp_recipient_deposit`.
1. Check that the sum of the number of inbound HRMP channels opened to `P` (the size of the set
found in `HrmpIngressChannelsIndex` for `P`) and the number of accepted open requests by the `P`
(the value from `HrmpAcceptedChannelRequestCount` for `P`) doesn't exceed the limit of channels
(`config.hrmp_max_parachain_inbound_channels` or `config.hrmp_max_parathread_inbound_channels`)
minus 1.
1. If the message kind is `HrmpCloseChannel(ch)`:
1. Check that `P` is either `ch.sender` or `ch.recipient`
1. Check that `HrmpChannels` for `ch` exists.
1. Check that `ch` is not in the `HrmpCloseChannelRequests` set.
1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
1. Checks that no message exceeds `config.max_upward_message_size`.
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages.
* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
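The simplified `check_upward_messages` acceptance criteria above can be sketched as a pure function. Config field names for the per-candidate limits follow the text; the queue-capacity limits (`max_upward_queue_count`, `max_upward_queue_size`) are assumed names for illustration, since the text only says "enough capacity":

```rust
struct Config {
    max_upward_message_num_per_candidate: usize,
    max_upward_message_size: usize,
    max_upward_queue_count: u32, // assumed name: max messages queued per para
    max_upward_queue_size: u32,  // assumed name: max queued bytes per para
}

/// `queue_size` is the current `(count, bytes)` from `RelayDispatchQueueSize`.
fn check_upward_messages(
    config: &Config,
    queue_size: (u32, u32),
    messages: &[Vec<u8>],
) -> Result<(), &'static str> {
    if messages.len() > config.max_upward_message_num_per_candidate {
        return Err("too many messages in candidate");
    }
    let (mut count, mut bytes) = queue_size;
    for msg in messages {
        if msg.len() > config.max_upward_message_size {
            return Err("message too large");
        }
        // Capacity must account for messages accepted earlier in this candidate.
        count += 1;
        bytes += msg.len() as u32;
        if count > config.max_upward_queue_count || bytes > config.max_upward_queue_size {
            return Err("queue capacity exceeded");
        }
    }
    Ok(())
}

fn main() {
    let config = Config {
        max_upward_message_num_per_candidate: 2,
        max_upward_message_size: 4,
        max_upward_queue_count: 10,
        max_upward_queue_size: 8,
    };
    assert!(check_upward_messages(&config, (0, 0), &[vec![0; 4], vec![0; 4]]).is_ok());
    // 5 bytes already queued + a 4-byte message exceeds the 8-byte capacity.
    assert!(check_upward_messages(&config, (0, 5), &[vec![0; 4]]).is_err());
    println!("ok");
}
```

Because the function only consults the cached `(count, bytes)` pair, the check never loads the full queue, which is exactly why `RelayDispatchQueueSize` exists as a separate mapping.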
@@ -222,58 +211,85 @@ Candidate Enactment:
* `prune_dmq(P: ParaId, processed_downward_messages)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.
* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Process all upward messages in order depending on their kinds:
1. If the message kind is `Dispatchable`:
1. Process each upward message `M` in order:
1. Append the message to `RelayDispatchQueues` for `P`
1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
1. Ensure that `P` is present in `NeedsDispatch`.
1. If the message kind is `HrmpInitOpenChannel(recipient, max_places, max_message_size)`:
1. Increase `HrmpOpenChannelRequestCount` by 1 for `P`.
1. Append `(P, recipient)` to `HrmpOpenChannelRequestsList`.
1. Add a new entry to `HrmpOpenChannelRequests` for `(sender, recipient)`
1. Set `sender_deposit` to `config.hrmp_sender_deposit`
1. Set `limit_used_places` to `max_places`
1. Set `limit_message_size` to `max_message_size`
1. Set `limit_used_bytes` to `config.hrmp_channel_max_size`
1. Reserve the deposit for the `P` according to `config.hrmp_sender_deposit`
1. If the message kind is `HrmpAcceptOpenChannel(sender)`:
1. Reserve the deposit for the `P` according to `config.hrmp_recipient_deposit`
1. For the request in `HrmpOpenChannelRequests` identified by `(sender, P)`, set `confirmed` flag to `true`.
1. Increase `HrmpAcceptedChannelRequestCount` by 1 for `P`.
1. If the message kind is `HrmpCloseChannel(ch)`:
1. If not already there, insert a new entry `Some(())` to `HrmpCloseChannelRequests` for `ch`
and append `ch` to `HrmpCloseChannelRequestsList`.

The following routine is intended to be called in the same time when `Paras::schedule_para_cleanup` is called.

`schedule_para_cleanup(ParaId)`:
1. Add the para into the `OutgoingParas` vector maintaining the sorted order.

The following routine is meant to execute pending entries in upward dispatchable queues. This function doesn't fail, even if
any of dispatchables return an error.
The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if
dispatching any individual upward message returns an error.

`process_pending_upward_dispatchables()`:
`process_pending_upward_messages()`:
1. Initialize a cumulative weight counter `T` to 0
1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None` start from the beginning. For each `P` encountered:
1. Dequeue `D` the first dispatchable `D` from `RelayDispatchQueues` for `P`
1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P`
1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`
1. Decode `D` into a dispatchable; if decoding succeeded:
1. If `weight_of(D) > config.dispatchable_upward_message_critical_weight` then skip the dispatchable. Otherwise:
1. Execute `D` and add the actual amount of weight consumed to `T`.
1. If `weight_of(D) + T > config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
> that could take place in the course of handling these dispatchables.
1. Delegate processing of the message to the runtime. The weight consumed is added to `T`.
1. If `T >= config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`.
> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
> that could take place in the course of handling these upward messages.
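The role of `NextDispatchRoundStartWith` in the loop above is simply to rotate the starting point between blocks. A small sketch (the helper name is illustrative, not from the runtime):

```rust
/// Return the order in which paras from `needs_dispatch` are visited this round,
/// starting from the saved position if there is one.
fn rotation_order(needs_dispatch: &[u32], start_with: Option<u32>) -> Vec<u32> {
    let start = start_with
        .and_then(|p| needs_dispatch.iter().position(|&x| x == p))
        .unwrap_or(0);
    needs_dispatch[start..]
        .iter()
        .chain(needs_dispatch[..start].iter())
        .copied()
        .collect()
}

fn main() {
    let needs_dispatch = vec![1, 2, 3];
    // With no saved position, iteration starts at the beginning.
    assert_eq!(rotation_order(&needs_dispatch, None), vec![1, 2, 3]);
    // If the previous round stopped at para 2, the next round starts there,
    // so no single para can monopolize the start of every round.
    assert_eq!(rotation_order(&needs_dispatch, Some(2)), vec![2, 3, 1]);
    println!("ok");
}
```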

Utility routines.

`queue_downward_message(P: ParaId, M: DownwardMessage)`:
1. Check if the serialized size of `M` exceeds the `config.critical_downward_message_size`. If so, return an error.
1. Check if the size of `M` exceeds the `config.max_downward_message_size`. If so, return an error.
1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`.
1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash.
1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`.
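The MQC (message queue chain) head update in step 3 links each new message to the whole queue history. The sketch below shows only the linking structure; the real chain uses the runtime's cryptographic hash (BLAKE2) over SCALE-encoded data, not `DefaultHasher`, and `mqc_link` is an illustrative name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// New head = H(previous head, sent_at, message), so the head commits to
/// every message ever queued and the block it was sent at.
fn mqc_link(prev_head: u64, sent_at: u32, msg: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    prev_head.hash(&mut h);
    sent_at.hash(&mut h);
    msg.hash(&mut h);
    h.finish()
}

fn main() {
    let genesis_head = 0u64;
    let head1 = mqc_link(genesis_head, 5, b"hello");
    let head2 = mqc_link(head1, 6, b"world");
    // Each enqueue advances the head; equal inputs always give the same head.
    assert_ne!(head1, head2);
    assert_eq!(mqc_link(genesis_head, 5, b"hello"), head1);
    println!("ok");
}
```

A parachain that knows the current head can verify a claimed sequence of downward messages by replaying the links, without trusting the provider of the messages.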

## Entry-points

The following entry-points are meant to be used for HRMP channel management.

Those entry-points are meant to be called from a parachain. `origin` is defined as the `ParaId` of
the parachain that executed the message.

* `hrmp_init_open_channel(recipient, max_places, max_message_size)`:
1. Check that the `origin` is not `recipient`.
1. Check that `max_places` is less than or equal to `config.hrmp_channel_max_places` and greater than zero.
    1. Check that `max_message_size` is less than or equal to `config.hrmp_channel_max_message_size` and greater than zero.
1. Check that `recipient` is a valid para.
1. Check that there is no existing channel for `(origin, recipient)` in `HrmpChannels`.
1. Check that there is no existing open channel request (`origin`, `recipient`) in `HrmpOpenChannelRequests`.
1. Check that the sum of the number of already opened HRMP channels by the `origin` (the size
    of the set found in `HrmpEgressChannelsIndex` for `origin`) and the number of open requests by the
    `origin` (the value from `HrmpOpenChannelRequestCount` for `origin`) doesn't exceed the limit of
    channels (`config.hrmp_max_parachain_outbound_channels` or `config.hrmp_max_parathread_outbound_channels`) minus 1.
    1. Check that `origin`'s balance is greater than or equal to `config.hrmp_sender_deposit`.
1. Reserve the deposit for the `origin` according to `config.hrmp_sender_deposit`
1. Increase `HrmpOpenChannelRequestCount` by 1 for `origin`.
1. Append `(origin, recipient)` to `HrmpOpenChannelRequestsList`.
1. Add a new entry to `HrmpOpenChannelRequests` for `(origin, recipient)`
1. Set `sender_deposit` to `config.hrmp_sender_deposit`
1. Set `limit_used_places` to `max_places`
1. Set `limit_message_size` to `max_message_size`
1. Set `limit_used_bytes` to `config.hrmp_channel_max_size`
* `hrmp_accept_open_channel(sender)`:
1. Check that there is an existing request between (`sender`, `origin`) in `HrmpOpenChannelRequests`
1. Check that it is not confirmed.
1. Check that the sum of the number of inbound HRMP channels opened to `origin` (the size of the set
    found in `HrmpIngressChannelsIndex` for `origin`) and the number of accepted open requests by the `origin`
    (the value from `HrmpAcceptedChannelRequestCount` for `origin`) doesn't exceed the limit of channels
    (`config.hrmp_max_parachain_inbound_channels` or `config.hrmp_max_parathread_inbound_channels`)
    minus 1.
    1. Check that `origin`'s balance is greater than or equal to `config.hrmp_recipient_deposit`.
1. Reserve the deposit for the `origin` according to `config.hrmp_recipient_deposit`
1. For the request in `HrmpOpenChannelRequests` identified by `(sender, origin)`, set `confirmed` flag to `true`.
1. Increase `HrmpAcceptedChannelRequestCount` by 1 for `origin`.
* `hrmp_close_channel(ch)`:
1. Check that `origin` is either `ch.sender` or `ch.recipient`
1. Check that `HrmpChannels` for `ch` exists.
1. Check that `ch` is not in the `HrmpCloseChannelRequests` set.
1. If not already there, insert a new entry `Some(())` to `HrmpCloseChannelRequests` for `ch`
and append `ch` to `HrmpCloseChannelRequestsList`.

## Session Change

1. Drain `OutgoingParas`. For each `P` happened to be in the list: