Broker new price adapter #4521
Conversation
…not on the number of cores. I have to think a bit more about this, but I believe we will arrive at cores having been sold at almost the base price.
The old type was very misleading as an entry in `AllowedRenewals` does not mean that a renewal is actually allowed.
Overkillus left a comment
Left some questions and notes here and there but overall looking solid.
This PR includes three separate pallet-level changes as I see it:

1. Giving more scope to the `PriceAdapter` by removing restrictive logic from the pallet and allowing it to dictate the renewal price
2. Changing `adapt_price` to only consider price changes
3. Renaming `AllowedRenewals` to reflect its actual implementation

1. I am fully behind, although I think that we should be adding more information to `SalePerformance` rather than stripping it back to only the price. Including other metrics from the sale, like cores offered and cores sold, would achieve this without committing to 2.
2. I am not convinced by this approach. I think the new model that you're proposing here weakens the price finding mechanism in cases where we do not sell out, and would increase the time for the price to respond to market fluctuations -- both internal (demand for and perceived value of coretime) and external (DOT-FIAT volatility). Even if I'm wrong, I think the work in 1 would give the `PriceAdapter` more flexibility and allow experimentation in the implementation without committing to it at the pallet level.
3. Good point, maybe we should reconsider where the completeness check happens. The check for a complete mask for a given workplan and full duration should probably happen before we put it in `AllowedRenewals` at all, leaving it as a definitive list of cores that can be renewed. I think that's useful information to have in state.
```rust
/// Performance of a past sale.
#[derive(Copy, Clone)]
pub struct SalePerformance<Balance> {
```
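The diff excerpt only shows the head of the struct. As a hypothetical sketch of how such a type could carry the two price signals debated in this thread (the field names and the helper below are illustrative assumptions, not necessarily the merged definition):

```rust
// Illustrative reconstruction of the type under review; the real pallet's
// generic bounds and field names may differ from this sketch.
#[derive(Copy, Clone, Debug, PartialEq)]
pub struct SalePerformance<Balance> {
    /// Price of the last core sold, if every offered core was sold.
    pub sellout_price: Option<Balance>,
    /// The base (end-of-sale) price reached by the lead-in curve.
    pub end_price: Balance,
}

impl<Balance: Copy> SalePerformance<Balance> {
    /// The signal a simple adapter might use: the sellout price when the
    /// sale sold out, otherwise the base price (hypothetical helper).
    pub fn price_signal(&self) -> Balance {
        self.sellout_price.unwrap_or(self.end_price)
    }
}

fn main() {
    let sold_out = SalePerformance { sellout_price: Some(120u64), end_price: 100 };
    let not_sold_out = SalePerformance::<u64> { sellout_price: None, end_price: 100 };
    println!("{} {}", sold_out.price_signal(), not_sold_out.price_signal());
}
```

Note that this shape is exactly what the objection above targets: two sales with very different core counts can collapse to the same `price_signal`.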
I think that relying only on the base price and last sold price loses too much information about the market conditions.
To take an example, even ignoring the implementation of the price adapter: with this model, if we were to offer 1 core and sell out immediately, we end up with the same outcome from the price adapter as if we were to offer 50 cores and sell out immediately. Clearly, in the second case we have a much stronger indicator of very high demand at that price (or at least we are more certain that the last price paid for a core was not an outlier), which should result in a stronger price correction signal.
The converse is also true: if we were to offer 1 core and not sell it, we end up with the base price, but there was no demand. Whereas if we were to offer 50 cores and sell 49, with the last going at the fixed price, we get the same price correction signal even though there was clearly a lot of demand; we just had slightly too much supply. Both having the same correction in the next sale would be incorrect.
I think we should be aiming to include more information in the price adapter, not less.
I think that the new model is a step in the right direction, but now we should increase the number of data points. Instead of tracking only the most recent sellout value, we could track the X most recent sales and take a weighted average of them, with recency as the weight.
Such approaches can remove the unnecessary price and target restrictions while allowing for multiple data points which cannot be easily manipulated with individual purchases. Thoughts?
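A minimal sketch of the recency-weighted scheme suggested here (purely illustrative; the linear weights and integer-only arithmetic are my assumptions, not anything from the pallet):

```rust
/// Recency-weighted average over the most recent sale prices, ordered
/// oldest-first. The i-th oldest entry gets weight i+1, so newer sales
/// count more. Integer math only, as pallet code would avoid floats.
fn weighted_recent_average(prices: &[u64]) -> u64 {
    let mut num = 0u128;
    let mut den = 0u128;
    for (i, &p) in prices.iter().enumerate() {
        let w = (i + 1) as u128; // oldest gets weight 1, newest the largest
        num += w * p as u128;
        den += w;
    }
    if den == 0 { 0 } else { (num / den) as u64 }
}

fn main() {
    // The newest price (160) pulls the average up:
    // (1*100 + 2*100 + 3*160) / (1 + 2 + 3) = 780 / 6 = 130
    assert_eq!(weighted_recent_average(&[100, 100, 160]), 130);
    println!("{}", weighted_recent_average(&[100, 100, 160]));
}
```

A single manipulated purchase in the newest slot moves the result far less than it would move a "last sellout price only" signal, which is the point of the proposal.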
Yeah, that's almost exactly what my hackmd about the pricing iteration talks about.
With a weighted average you can weight towards newer prices for the regions on the open market (to get better sensitivity to market forces), and towards older data for renewals.
I mean just about the weighted average, but still using the full information of a sale (cores offered and sold contributing to the weighting).
I am pretty convinced that the metric we actually want is price. To cover your examples better @seadanda, we could indeed use some average. Then in your scenario where 49 cores had actually been sold rather expensively and only 1 was cheap, the price correction would adjust accordingly.
For the first example, adding the core count would not really help. If you only offered one core and it sold out at the highest price, all you know is that the offered cores have been sold out. You don't know what would have happened if you had offered more. The best way to fix this is to have a large lead-in range that cannot be manipulated too easily.
The nice thing about using the lowest price is that it cannot easily be manipulated. Using a weighted average could also work. @Overkillus do you have a particular scheme in mind?
Another way to fix this would be to allow for more than a doubling of the upper range. Then even in @seadanda's example, where we would adjust drastically downwards, if the upper end of the lead-in were still 10x instead of 2x of the found price, it would likely not be an issue at all.
Thinking about it more, I am not sure this is a real problem: it seems quite unrealistic that 49 cores are sold at a really high price and the last core at the minimum, but even in that extreme scenario the next sale will start at 2x that price.
Given that absolutely nobody wanted that last core, that 2x should be sufficient to reflect the market well enough.
But in general, handling a spike in supply (up or down) will obviously affect price. With the currently implemented model we can handle those very well if they go downwards, and okayish when they go up - we adjust only 2x. The only risk here would be selling cores "too cheaply" for a short while. Assuming the price spike was just due to a short-lived shortage of supply, I would argue that the moderate price increase is actually a good thing.
We could make it symmetric: 10x up and 10x down from the target price - I actually already have that prepared, with the curve adjusted so that the optimal price is still in the middle time-wise.
And in fact this might be better: it would still be expensive to do price manipulation, and the system works against you. E.g. if at one sale only 1 core was offered and you bought it at the maximum price, the base price in the next sale would be that price/10 - which is exactly the target price of the previous sale. So actually no harm done.
Yeah. I think that change is actually good.
h = 10 * o
o = 10 * b

(where b is the base price, o the optimal/target price, and h the highest lead-in price)
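The manipulation scenario above checks out arithmetically under these symmetric bounds; a tiny sketch with hypothetical numbers (b, o, h as in the comment):

```rust
fn main() {
    // Symmetric 10x bounds around the target price o:
    // highest lead-in price h = 10 * o, base (minimum) price b = o / 10.
    let o: u64 = 100; // target price of the current sale (hypothetical value)
    let h = 10 * o;   // 1000: price at the very start of the lead-in
    let b = o / 10;   // 10:   price at the end of the sale

    // Manipulation attempt: buy the only offered core at the maximum h.
    // The next sale's base price becomes that price / 10, which is just
    // the previous sale's target price again, so no lasting harm is done.
    let next_base = h / 10;
    assert_eq!(next_base, o);
    println!("h={h} b={b} next_base={next_base}");
}
```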
The CI pipeline was cancelled due to the failure of one of the required jobs.
@eskimor https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/6338420 was started for your command
`…=dev --target_dir=substrate --pallet=pallet_broker`

@eskimor Command
seadanda left a comment
Sweet, let's get it merged. Future models can add anything more general that is needed.
Fixes #4360

Also rename: `AllowedRenewals` -> `PotentialRenewals` to avoid confusion of future readers. (An entry in `AllowedRenewals` is not enough to allow a renewal, the assignment also has to be complete, which is only checked afterwards.)

- [x] Does not work with renewals as is - fix.
- [x] More tests
- [x] PR docs

Edit 1: (Relevant blog post: https://grillapp.net/12935/agile-coretime-pricing-explained-166522?ref=29715)

Co-authored-by: eskimor <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
Co-authored-by: command-bot <>
Only the price adapter changes are backported to avoid out-of-order migrations
Related to #4521 and #4656 This allows a release of the Broker pallet so the new price adapter can be tested on Kusama ASAP (polkadot-fellows/runtimes#334).