feat(l1): refactor chain config (#5233) #5483
base: main
Conversation
Benchmark Results Comparison: no significant difference was registered for any benchmark run (BubbleSort, ERC20Approval, ERC20Mint, ERC20Transfer, Factorial, FactorialRecursive, Fibonacci, FibonacciRecursive, ManyHashes, MstoreBench, Push, SstoreBench_no_opt).
```rust
// Check whether the tx is replay-protected
if head_tx.tx.protected() && !chain_config.is_eip155_activated(context.block_number()) {
    // Ignore replay protected tx & all txs from the sender
    // Pull transaction from the mempool
    debug!("Ignoring replay-protected transaction: {}", tx_hash);
    txs.pop();
    blockchain.remove_transaction_from_pool(&tx_hash)?;
    continue;
}
```
What does this refactor have to do with removing this piece of code?
Taking the refactor into account, the correct thing to do here is to find another way to express the `!chain_config.is_eip155_activated(context.block_number())` part of this code rather than dropping it.
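A hedged sketch of one option, assuming the refactored API from this PR (`get_fork` plus the ordered `Fork` enum) and swapping the old block-number lookup for a timestamp-based one; the helper's name and signature are hypothetical, not part of this diff:

```rust
// Hypothetical replacement for is_eip155_activated under the refactor.
// get_fork() defaults to Paris for pre-merge networks, and EIP-155
// activated at Spurious Dragon, long before the Merge, so this returns
// true for every block the node can actually process. The rejection
// branch above then becomes dead code, matching the "unreachable"
// argument in the next comment.
fn is_eip155_activated(chain_config: &ChainConfig, block_timestamp: u64) -> bool {
    chain_config.get_fork(block_timestamp) >= Fork::Paris
}
```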
I think the reasoning is that we don't support running pre-merge blocks, so this code should be unreachable.
Given we already broke main by changing behavior during a chain config refactor, wouldn't it be better to leave this part out and reintroduce it later separately?
This part didn't break main
```rust
#[repr(u8)]
#[derive(Debug, PartialEq, Eq, PartialOrd, Default, Hash, Clone, Copy, Serialize, Deserialize)]
pub enum Fork {
    // ...
```
Maybe we should move the fork type and list to another file. wdyt?
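If it does move, a minimal sketch of what the extracted module could look like; the file name, the variant list, and the contents of the `FORKS` constant are assumptions for illustration (the real enum may carry more forks):

```rust
// forks.rs (hypothetical): the Fork type plus the ordered list that
// get_blob_schedule_for_time scans with rfind() below.
use serde::{Deserialize, Serialize};

#[repr(u8)]
#[derive(Debug, PartialEq, Eq, PartialOrd, Default, Hash, Clone, Copy, Serialize, Deserialize)]
pub enum Fork {
    #[default]
    Paris,
    Shanghai,
    Cancun,
    Prague,
}

// Oldest-to-newest so that rfind() yields the most recent activated fork.
pub const FORKS: [Fork; 4] = [Fork::Paris, Fork::Shanghai, Fork::Cancun, Fork::Prague];
```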
JereSalo left a comment
Defaulting to Paris if none of the forks is activated may work if we assume there's an earlier check that prevents anybody from initializing with a pre-Merge config, but I still find it kind of weird. It doesn't need any change though, just saying; it may be the best thing to do anyway.
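For reference, a minimal sketch of the defaulting behaviour under discussion, assuming `is_fork_activated` returns false whenever `fork_activation_time` is `None` (i.e. for pre-merge forks):

```rust
// Sketch of get_fork as described in this PR: scan forks newest-to-oldest
// and fall back to Paris when no timestamp-scheduled fork has activated.
pub fn get_fork(&self, block_timestamp: u64) -> Fork {
    FORKS
        .into_iter()
        .rfind(|fork| self.is_fork_activated(*fork, block_timestamp))
        .unwrap_or(Fork::Paris)
}
```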
```rust
pub fn get_blob_schedule_for_time(&self, block_timestamp: u64) -> Option<ForkBlobSchedule> {
    if let Some(fork_with_current_blob_schedule) = FORKS.into_iter().rfind(|fork| {
        self.get_blob_schedule_for_fork(*fork).is_some()
            && self.is_fork_activated(*fork, block_timestamp)
    }) {
        self.get_blob_schedule_for_fork(fork_with_current_blob_schedule)
    } else {
        None
    }
}
```
Can this be simpler?
Like, can we just do `let fork = self.get_fork(block_timestamp)` and then `self.get_blob_schedule_for_fork(fork)`, or would there be a problem with that? I'm assuming that every new fork will have a blob schedule set for it.
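A minimal sketch of that suggestion, assuming every activated fork that supports blobs has a schedule configured:

```rust
// Hypothetical simplification. If an activated fork lacked a blob
// schedule, this returns None instead of falling back to the most recent
// fork that has one, which is the assumption flagged above.
pub fn get_blob_schedule_for_time(&self, block_timestamp: u64) -> Option<ForkBlobSchedule> {
    let fork = self.get_fork(block_timestamp);
    self.get_blob_schedule_for_fork(fork)
}
```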
Linking discussion from reverted PR as I made the same comment there: #5233 (comment)
Motivation

Restores #5233 by reverting #5464, plus changes `get_fork` behaviour to default to Paris if the network is pre-merge (this is the change that broke the Hive Daily Report tests). Also changes the method `fork_activation_time_or_block` to `fork_activation_time`.

Description

Changes `fork_activation_time_or_block` to `fork_activation_time`, which returns `None` for pre-merge forks, so we default to Paris in the `get_fork` method if the network is pre-merge.

Resolves #4720
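For context, a hedged sketch of the renamed method's shape; the timestamp field names (`shanghai_time`, `cancun_time`, `prague_time`) are assumptions based on the usual chain config layout, not confirmed by this diff:

```rust
// Hypothetical: timestamp-scheduled forks expose their activation time,
// while Paris and everything earlier return None, which is what lets
// get_fork fall back to Paris for pre-merge networks.
pub fn fork_activation_time(&self, fork: Fork) -> Option<u64> {
    match fork {
        Fork::Shanghai => self.shanghai_time,
        Fork::Cancun => self.cancun_time,
        Fork::Prague => self.prague_time,
        _ => None,
    }
}
```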