Pre-execute the transaction and execute the block in multi-thread #4616
base: dag-master
Conversation
Walkthrough
Adds batch account operations (sign/unlock) across API, services, RPC, and client; refactors txpool interfaces from header-based to state-root-based with batch sequence APIs; overhauls miner template creation to an async, event-driven pipeline; adjusts chain execution/verification and OpenedBlock state sharing; updates sync execution to async; modifies constants and utilities.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Miner as MinerService
participant CBS as CreateBlockTemplate Service
participant Proc as ProcessTransaction (async)
participant Ch as Channel
participant Bus as Event Bus
Miner->>CBS: notify(BlockTemplateRequest)
CBS->>Proc: spawn process(header, state_root, txs)
Proc-->>Ch: send(ProcessHeaderTemplate)
CBS->>CBS: periodic loop consumes channel
CBS-->>Bus: broadcast(BlockTemplateResponse)
Bus-->>Miner: BlockTemplateResponse event
Miner->>Miner: compute gap / skip check
Miner-->>Bus: DispatchMintBlockTemplate (if not skipped)
sequenceDiagram
autonumber
participant Client as RpcClient
participant RPC as AccountRpc (Server)
participant Svc as AccountAsyncService
participant Acc as AccountService
participant Mgr as AccountManager
Client->>RPC: account.sign_txn_in_batch(raw_txns)
RPC->>Svc: sign_txn_in_batch(raw_txns)
Svc->>Acc: AccountRequest::SignTxnInBatch
Acc->>Mgr: sign_txn_in_batch(txns)
Mgr-->>Acc: Vec<SignedUserTransaction>
Acc-->>Svc: AccountResponse::SignedTxnList
Svc-->>RPC: Vec<SignedUserTransaction>
RPC-->>Client: Vec<SignedUserTransaction>
Client->>RPC: account.unlock_in_batch(batch, duration)
RPC->>Svc: unlock_account_in_batch(batch, dur)
Svc->>Acc: AccountRequest::UnlockAccountInBatch
Acc->>Mgr: unlock_account_in_batch(batch, dur)
Mgr-->>Acc: Vec<AccountInfo>
Acc-->>Svc: AccountResponse::AccountList
Svc-->>RPC: Vec<AccountInfo>
RPC-->>Client: Vec<AccountInfo>
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120–180 minutes
Possibly related PRs
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cmd/tx-factory/src/main.rs (1)
517-560: Fix potential infinite loop on account creation failure.
continue; bypasses i += 1, risking a tight infinite loop if account_create keeps failing.
Apply this diff to ensure forward progress:
 let account = match self.client.account_create(self.account_password.clone()) {
     Ok(account) => account,
     Err(e) => {
         error!("create account error: {}", e);
-        continue;
+        i += 1;
+        continue;
     }
 };
@@
-    i += 1;
+    i += 1;
🧹 Nitpick comments (52)
vm/vm-runtime/src/starcoin_vm.rs (2)
1667-1676: Reduce noise and add useful context to concurrency logs; also time the parallel path.
Logging the same message before and after execution at different levels is redundant and will spam logs per block at info level. Suggest: downgrade the pre-execution log to debug, include txn_count, and add an elapsed time in the post-execution log. This gives actionable signals without flooding prod logs. Also consider that set_concurrency_level_once already logs at info when the level is set.
Apply this diff within this block:
@@
-        if concurrency_level > 1 {
-            info!("TurboSTM executor concurrency_level {}", concurrency_level);
+        if concurrency_level > 1 {
+            let txn_count = transactions.len();
+            let start = Instant::now();
+            debug!(
+                "TurboSTM parallel execute start: concurrency_level={}, txns={}",
+                concurrency_level, txn_count
+            );
             let (result, _) = crate::parallel_executor::ParallelStarcoinVM::execute_block(
                 transactions,
                 state_view,
                 concurrency_level,
                 block_gas_limit,
                 metrics,
             )?;
-            debug!("TurboSTM executor concurrency_level {}", concurrency_level);
+            debug!(
+                "TurboSTM parallel execute end: concurrency_level={}, txns={}, elapsed_ms={}",
+                concurrency_level,
+                txn_count,
+                start.elapsed().as_millis()
+            );
             Ok(result)
And add the import at the top of the file (outside this hunk):
use std::time::Instant;
1675-1675: Differentiate the post-execution log message or remove it.
As written, this debug message repeats the same static text as the pre-execution log, providing little value. If you don’t adopt the timing suggestion above, at least change it to indicate completion, e.g., “parallel execute finished,” and include txn count.
-            debug!("TurboSTM executor concurrency_level {}", concurrency_level);
+            debug!("TurboSTM parallel execute finished; concurrency_level={}, txns={}", concurrency_level, /*transactions.len()*/ 0)
Note: you’ll need to capture transactions.len() before it’s moved into execute_block (or keep a separate count) to populate the placeholder.
config/src/genesis_config.rs (3)
757-758: Consensus parameters bumped: verify invariants and throughput targets before merging
- G_BASE_BLOCK_GAS_LIMIT increased 10x to 500,040,000; several networks multiply this by 10 again (e.g., Test/Dev/Halley/DagTest), yielding 5,000,400,000 gas per block. Combined with a 1s base_block_time_target, this is a material change that can impact proposer latency, execution time, and fork rate.
- G_MAX_TRANSACTION_PER_BLOCK reduced from 700 to 400. Ensure block construction logic uses the minimum of {gas budget, tx-count cap} and that txpool batching, RPC, and miner template generation are consistent with this new cap.
Please verify:
- Block building still meets the 1s target under typical and worst-case gas usage with multi-thread execution enabled.
- txpool, executor, and miner template code paths have no hard-coded assumptions (e.g., 700) and correctly read the new constant.
- The invariant “block_gas_limit > maximum_number_of_gas_units (per-tx)” holds for all gas constant versions used by the networks, with a non-trivial safety margin.
Add a lightweight unit test to enforce the invariant and avoid future regressions. You can append this to the existing tests module in this file:
#[test]
fn test_block_gas_limit_exceeds_tx_gas_cap_for_all_schedules() {
    // Ensure base block gas limit is strictly greater than per-tx max across all schedules.
    // Safety margin is expected; adjust as needed if schedules change.
    let v1_max = G_GAS_CONSTANTS_V1.maximum_number_of_gas_units;
    let v2_max = G_GAS_CONSTANTS_V2.maximum_number_of_gas_units;
    let v3_max = G_GAS_CONSTANTS_V3.maximum_number_of_gas_units;
    let test_max = G_TEST_GAS_CONSTANTS.maximum_number_of_gas_units;
    assert!(G_BASE_BLOCK_GAS_LIMIT > v1_max, "block gas limit must exceed V1 tx max");
    assert!(G_BASE_BLOCK_GAS_LIMIT > v2_max, "block gas limit must exceed V2 tx max");
    assert!(G_BASE_BLOCK_GAS_LIMIT > v3_max, "block gas limit must exceed V3 tx max");
    assert!(G_BASE_BLOCK_GAS_LIMIT > test_max, "block gas limit must exceed TEST tx max");
    // Also verify the networks that apply *10 multipliers.
    let multiplied = G_BASE_BLOCK_GAS_LIMIT.saturating_mul(10);
    assert!(
        multiplied > v1_max && multiplied > v2_max && multiplied > v3_max && multiplied > test_max,
        "multiplied block gas limit must exceed all per-tx maxima"
    );
}
Optionally, consider deriving the tx-count cap from target block time and an empirical average gas-per-tx to avoid manual drift, e.g., cap ≈ min(hard-cap, block_gas_limit / avg_tx_gas). This keeps throughput aligned as schedules evolve.
757-757: Fix the comment grammar and document the rationale for the exact number
The magic “+40,000” looks intentional (to clear a 500,000,000 tx cap?). Please encode the intent in the comment.
-pub static G_BASE_BLOCK_GAS_LIMIT: u64 = 500_040_000; //must big than maximum_number_of_gas_units
+pub static G_BASE_BLOCK_GAS_LIMIT: u64 = 500_040_000; // must be greater than maximum_number_of_gas_units (kept slightly above the per-tx max to guarantee feasibility)
796-799: Cross-network sanity: multiplied gas limits will be 10x higher on test/dev/halley/dagtest—confirm this is intentional
Several network configs set base_block_gas_limit to G_BASE_BLOCK_GAS_LIMIT * 10. With the new base, that yields 5,000,400,000 gas per block on these nets. If the motivation was only to clear the per-tx max, consider whether the extra 10x multiplier is still needed for stress testing, or reduce it to keep test cycles and CI faster.
Also applies to: 850-853, 907-910, 968-971, 1031-1033, 1095-1098, 1170-1173, 1229-1232
block-relayer/src/block_relayer.rs (4)
82-85: Re-enable sync gating or add rate-limiting to avoid broadcast storms during initial sync
Removing this guard will broadcast every historical block during catch-up (NewHeadBlock/NewDagBlock/NewBranch), increasing redundant network traffic and peer load. If this was intentional, consider rate-limiting or conditional broadcasting while syncing. Otherwise, restore the check.
Apply this minimal revert:
-        // if !self.is_nearly_synced() {
-        //     debug!("[block-relay] Ignore NewHeadBlock event because the node has not been synchronized yet.");
-        //     return;
-        // }
+        if !self.is_nearly_synced() {
+            debug!("[block-relay] Ignore NewHeadBlock event because the node has not been synchronized yet.");
+            return;
+        }
If you do need broadcasting during sync, alternatives:
- Broadcast only every Nth block while syncing.
- Broadcast only when block timestamp is within a freshness window of now.
- Broadcast to a sampled subset of peers instead of all peers.
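The "broadcast only every Nth block while syncing" alternative can be sketched as a small gate; the names here (SyncBroadcastGate, should_broadcast) are illustrative, not from the PR.

```rust
// Hypothetical rate-limiting gate for block relaying during catch-up.
pub struct SyncBroadcastGate {
    every_n_while_syncing: u64,
    seen: u64,
}

impl SyncBroadcastGate {
    pub fn new(every_n_while_syncing: u64) -> Self {
        Self { every_n_while_syncing, seen: 0 }
    }

    /// Returns true if this block should be relayed to peers.
    /// When fully synced, every block is broadcast; during catch-up,
    /// only every Nth block is, which bounds redundant traffic.
    pub fn should_broadcast(&mut self, nearly_synced: bool) -> bool {
        if nearly_synced {
            return true;
        }
        self.seen += 1;
        self.seen % self.every_n_while_syncing == 0
    }
}

fn main() {
    let mut gate = SyncBroadcastGate::new(100);
    // During sync, only 1 in 100 blocks passes the gate.
    let relayed = (0..1000).filter(|_| gate.should_broadcast(false)).count();
    println!("relayed {} of 1000 blocks during sync", relayed);
}
```

The same gate degenerates to pass-through once the node reports it is nearly synced, so steady-state relaying is unaffected.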
165-170: Swap placeholders in ensure! message (expected vs. collected lengths are flipped)
The error message prints collected first where it says “expect”, which is misleading during diagnosis.
-        ensure!(
-            collect_txns.len() == expect_txn_len,
-            "Fill compact block error, expect txn len: {}, but collect txn len: {}",
-            collect_txns.len(),
-            expect_txn_len
-        );
+        ensure!(
+            collect_txns.len() == expect_txn_len,
+            "Fill compact block error, expect txn len: {}, but collect txn len: {}",
+            expect_txn_len,
+            collect_txns.len()
+        );
111-135: Process prefilled txns before consulting the txpool (aligns with TODO and avoids redundant work)
Filling prefilled entries first reduces lookups and avoids overwriting txpool results. This also tends to reduce network fetches.
-        let mut missing_txn_short_ids = HashSet::new();
-        // Fill the block txns by tx pool
-        for (index, short_id) in compact_block.short_ids.iter().enumerate() {
-            if let Some(txn) = txpool.find_txn(&short_id.0) {
-                filled_from_txpool += 1;
-                txns[index] = Some(txn);
-            } else {
-                missing_txn_short_ids.insert(short_id);
-            }
-        }
-
-        //TODO move prefilled before txpool
-
-        // Fill the block txns by prefilled txn
-        for prefilled_txn in compact_block.prefilled_txn {
-            if prefilled_txn.index as usize >= txns.len() {
-                continue;
-            }
-            let id = prefilled_txn.tx.id();
-            txns[prefilled_txn.index as usize] = Some(prefilled_txn.tx);
-            filled_from_prefilled += 1;
-            missing_txn_short_ids.remove(&ShortId(id));
-        }
+        let mut missing_txn_short_ids = HashSet::new();
+        // Fill prefilled first
+        for prefilled_txn in compact_block.prefilled_txn {
+            if (prefilled_txn.index as usize) < txns.len() {
+                txns[prefilled_txn.index as usize] = Some(prefilled_txn.tx);
+                filled_from_prefilled += 1;
+            }
+        }
+        // Then consult txpool only for the remaining indices
+        for (index, short_id) in compact_block.short_ids.iter().enumerate() {
+            if txns[index].is_none() {
+                if let Some(txn) = txpool.find_txn(&short_id.0) {
+                    filled_from_txpool += 1;
+                    txns[index] = Some(txn);
+                } else {
+                    missing_txn_short_ids.insert(short_id);
+                }
+            }
+        }
247-247: Typo in log message (“PeerCmpctBlockEvent”)
Minor logging polish for easier grep and consistency.
-            error!("[block-relay] process PeerCmpctBlockEvent error {:?}", e);
+            error!("[block-relay] process PeerCompactBlockEvent error {:?}", e);
miner/Cargo.toml (1)
37-37: Gate async-std behind a feature in miner
We confirmed that async-std is actively used in miner/src/create_block_template/block_builder_service.rs:530 (async_std::task::spawn) from the grep output. To prevent accidental mixing with the Tokio-based parts of the workspace, please:
• In miner/Cargo.toml, change the dependency to:
-async-std = { workspace = true }
+async-std = { workspace = true, optional = true, default-features = false }
• Add a feature flag in the same file:
[features]
async-std-runtime = ["async-std"]
• Guard all async-std imports and calls (e.g. async_std::task::spawn) behind:
#[cfg(feature = "async-std-runtime")]
This confines the async-std runtime to code paths opted in via the new feature and avoids subtle cross-runtime issues.
chain/api/src/errors.rs (2)
42-44: Good addition of granular VerifyBlockField variants.
Introducing Parents and PruningPoint improves error specificity in DAG verification. Please add short doc comments to clarify intended use (e.g., “parents existence/uniqueness checks”, “pruning point ancestry checks”) for future maintainers.
54-56: Normalize Display casing to match existing variants.
Existing variants render as lowercase ("body", "header", "uncle", "consensus", "state"). The new ones render with leading capitals, which is inconsistent and may surprise log scrapers.
Apply:
-            Self::Parents => write!(f, "Parents"),
-            Self::PruningPoint => write!(f, "PruningPoint"),
+            Self::Parents => write!(f, "parents"),
+            Self::PruningPoint => write!(f, "pruning_point"),
Optional: derive Copy + Clone for VerifyBlockField since it’s a small enum frequently copied in errors:
#[derive(Debug, Copy, Clone)]
pub enum VerifyBlockField { ... }
account/api/src/message.rs (2)
25-27: Batch signing API: confirm signer resolution semantics.
Single-sign API includes an explicit signer, but SignTxnInBatch relies solely on each RawUserTransaction’s sender. That’s fine, but verify downstream code paths don’t assume/require an explicit signer and that mixed-sender batches are supported.
If you need homogeneous batches per signer, consider:
SignTxnInBatch {
    signer: AccountAddress,
    txns: Vec<RawUserTransaction>,
}
Otherwise, please document that mixed senders are allowed and the signer is taken from txn.sender().
36-36: Batch unlock response shape: ensure callers can access per-account results.
UnlockAccountInBatch returns a vector of (address, password) plus a Duration, but AccountResponse has only UnlockAccountResponse (unit) and AccountList(Vec<AccountInfo>). Confirm the service responds with AccountList for the batch path so clients can know which accounts succeeded/failed.
I can help wire the batch handler to return AccountList and update RPC/client stubs if needed.
chain/open-block/src/lib.rs (1)
38-38: Sharing ChainStateDB via Arc: check concurrent read/write safety.
Switching state to Arc enables aliasing across threads. This OpenedBlock mutates state (apply_write_set/commit). If other components hold a reader obtained via state_reader() concurrently, you may observe inconsistent reads unless ChainStateDB provides internal synchronization and coherent snapshot semantics for readers.
- If you need concurrent readers during block building, consider exposing a snapshotting reader (e.g., a forked view or RemoteStateReader) instead of the live mutable DB.
- Alternatively, wrap in Arc<RwLock> and return a read guard for readers. This is heavier, so use only if necessary.
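The Arc<RwLock> option above can be sketched as follows; State and SharedState are illustrative stand-ins, not the actual ChainStateDB API.

```rust
// Minimal sketch of sharing mutable state behind Arc<RwLock<...>>.
use std::sync::{Arc, RwLock};

#[derive(Default)]
struct State {
    root: u64, // stand-in for a state root
}

#[derive(Clone)]
struct SharedState(Arc<RwLock<State>>);

impl SharedState {
    fn new() -> Self {
        Self(Arc::new(RwLock::new(State::default())))
    }
    /// Writers take the exclusive lock, so readers never observe a half-applied write set.
    fn apply(&self, new_root: u64) {
        self.0.write().unwrap().root = new_root;
    }
    /// Readers get a coherent view for the lifetime of the read guard.
    fn read_root(&self) -> u64 {
        self.0.read().unwrap().root
    }
}

fn main() {
    let state = SharedState::new();
    let reader = state.clone(); // cheap Arc clone, shared underlying state
    state.apply(42);
    println!("root seen by reader: {}", reader.read_root());
}
```

As the comment notes, this is the heavier option; a snapshotting reader avoids lock contention entirely if readers can tolerate a slightly stale view.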
chain/src/verifier/mod.rs (1)
304-308: PruningPoint error labeling is accurate and helpful.
Switching to VerifyBlockField::PruningPoint improves triage for pruning-related ancestry failures. Consider also updating earlier checks in verify_dag (Lines 262 and 269) to use Parents for consistency.
account/src/account_manager.rs (2)
102-119: Consider refactoring to reduce code duplication
The unlock_account_in_batch method duplicates the account loading and caching logic from unlock_account. Both methods share the same pattern of loading accounts, caching passwords, and returning account info.
Consider extracting the common logic:
+    fn unlock_account_internal(
+        &self,
+        address: AccountAddress,
+        password: String,
+        duration: Duration,
+    ) -> AccountResult<AccountInfo> {
+        let account = Account::load(address, Some(password.clone()), self.store.clone())?
+            .ok_or(AccountError::AccountNotExist(address))?;
+        let ttl = std::time::Instant::now().add(duration);
+        self.key_cache
+            .write()
+            .cache_pass(address, password, ttl);
+        Ok(account.info())
+    }
+
     pub fn unlock_account(
         &self,
         address: AccountAddress,
         password: &str,
         duration: Duration,
     ) -> AccountResult<AccountInfo> {
-        let account = Account::load(address, Some(password.to_string()), self.store.clone())?
-            .ok_or(AccountError::AccountNotExist(address))?;
-        let ttl = std::time::Instant::now().add(duration);
-        self.key_cache
-            .write()
-            .cache_pass(address, password.to_string(), ttl);
-        Ok(account.info())
+        self.unlock_account_internal(address, password.to_string(), duration)
     }

     pub fn unlock_account_in_batch(
         &self,
         batch: Vec<(AccountAddress, String)>,
         duration: Duration,
     ) -> AccountResult<Vec<AccountInfo>> {
         let mut accounts = vec![];
         for (address, password) in batch.into_iter() {
-            let account = Account::load(address, Some(password.to_string()), self.store.clone())?
-                .ok_or(AccountError::AccountNotExist(address))?;
-            let ttl = std::time::Instant::now().add(duration);
-            self.key_cache
-                .write()
-                .cache_pass(address, password.to_string(), ttl);
-            accounts.push(account.info());
+            accounts.push(self.unlock_account_internal(address, password, duration)?);
         }
         Ok(accounts)
     }
108-109: Redundant string conversion
The password is already a String from the vector iterator, so calling .to_string() creates an unnecessary allocation.
-            let account = Account::load(address, Some(password.to_string()), self.store.clone())?
+            let account = Account::load(address, Some(password.clone()), self.store.clone())?
And on line 114:
-                .cache_pass(address, password.to_string(), ttl);
+                .cache_pass(address, password, ttl);
chain/src/chain.rs (1)
476-477: Simplify block ID extraction
The block ID is captured early but then extracted again from verified_block.block.header().id(). This creates redundancy.
Since block_id is already captured at line 476, reuse it consistently throughout:
-        let block_id = verified_block.block.header().id();
-        info!("execute dag block:{:?}", block_id);
+        let block_id = verified_block.block.header().id();
+        info!("execute dag block:{:?}", block_id);
sync/src/block_connector/execute_service.rs (2)
83-94: Consider making timeout configurable
The hardcoded 30-second timeout (3000 * 10ms) for waiting on parent blocks should be configurable to adapt to different network conditions.
Consider extracting this as a configuration parameter:
+const PARENT_READY_TIMEOUT_MS: u64 = 30000; // 30 seconds
+const PARENT_CHECK_INTERVAL_MS: u64 = 10;
 for parent_id in new_block.header().parents_hash() {
-    let mut count: u64 = 3000;
+    let mut count: u64 = PARENT_READY_TIMEOUT_MS / PARENT_CHECK_INTERVAL_MS;
     while !Self::check_parent_ready(parent_id, storage.clone(), dag.clone())? && count > 0 {
-        async_std::task::sleep(std::time::Duration::from_millis(10)).await;
+        async_std::task::sleep(std::time::Duration::from_millis(PARENT_CHECK_INTERVAL_MS)).await;
         count = count.saturating_sub(1);
         if count == 0 {
             return Err(anyhow::anyhow!(
                 "wait dag block timeout, for block id: {:?}",
                 parent_id
             ));
         }
     }
 }
248-288: Consider adding error metrics for peer block failures
The async task for peer block execution properly handles errors by sending failure info through the channel. Consider adding metrics to track peer block execution failures for monitoring.
Would you like me to suggest a metrics implementation to track execution failures per peer?
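A minimal per-peer failure counter could look like the sketch below; a real implementation would likely hook into the node's existing metrics registry instead, and PeerFailureMetrics is an illustrative name.

```rust
// Hypothetical in-memory counter of block-execution failures keyed by peer.
use std::collections::HashMap;

#[derive(Default)]
struct PeerFailureMetrics {
    failures: HashMap<String, u64>,
}

impl PeerFailureMetrics {
    /// Increment the failure count for the given peer.
    fn record_failure(&mut self, peer_id: &str) {
        *self.failures.entry(peer_id.to_string()).or_insert(0) += 1;
    }
    /// Read the current failure count (0 if the peer was never seen).
    fn failures_for(&self, peer_id: &str) -> u64 {
        self.failures.get(peer_id).copied().unwrap_or(0)
    }
}

fn main() {
    let mut m = PeerFailureMetrics::default();
    m.record_failure("peer-a");
    m.record_failure("peer-a");
    m.record_failure("peer-b");
    println!("peer-a failures: {}", m.failures_for("peer-a"));
}
```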
account/service/src/service.rs (2)
140-142: Batch signing path added — confirm semantics (all-or-nothing vs partial successes) and add basic telemetry
The implementation is straightforward and correct. Two suggestions:
- Clarify whether a single locked/missing account should fail the entire batch (current behavior) or return per-item results; if the latter, response type would need redesign across API layers.
- Add lightweight tracing (batch size, elapsed) to observe usage patterns and latency.
Example minimal instrumentation:
 AccountRequest::SignTxnInBatch { txns } => {
-    AccountResponse::SignedTxnList(self.manager.sign_txn_in_batch(txns)?)
+    let _span = trace_span!("account.sign_txn_in_batch", size = txns.len()).entered();
+    AccountResponse::SignedTxnList(self.manager.sign_txn_in_batch(txns)?)
 }
152-155: Batch unlock path added — confirm default duration semantics and guard against large batches
Looks correct. Two items to verify:
- Duration defaults: upstream RPC maps None to u32::MAX seconds; ensure that’s intentional and documented to avoid unbounded unlocks.
- Consider enforcing a max batch size and emitting a warning/metric to protect wallet storage and key-cache from spikes.
If you want, I can propose a limit (e.g., 1024) and return an error when exceeded.
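The proposed limit could be enforced with a small guard like the sketch below; MAX_UNLOCK_BATCH and check_batch_size are illustrative names, not part of the actual service.

```rust
// Hypothetical batch-size guard; rejects oversized unlock batches up front.
const MAX_UNLOCK_BATCH: usize = 1024;

fn check_batch_size(len: usize) -> Result<(), String> {
    if len > MAX_UNLOCK_BATCH {
        return Err(format!(
            "batch size {} exceeds limit {}",
            len, MAX_UNLOCK_BATCH
        ));
    }
    Ok(())
}

fn main() {
    // Called before touching wallet storage or the key cache.
    assert!(check_batch_size(512).is_ok());
    assert!(check_batch_size(2048).is_err());
    println!("batch size guard ok");
}
```

Rejecting early keeps a single oversized request from monopolizing the key-cache write lock.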
rpc/api/src/account/mod.rs (2)
45-50: Add docstring clarifying signer resolution and output ordering for sign_txn_in_batch
Please document that:
- Each RawUserTransaction’s sender determines the signer; those accounts must be unlocked.
- The returned SignedUserTransaction list preserves the input order.
- Failure semantics (entire batch errors vs partial).
Proposed docs:
#[rpc(name = "account.sign_txn_in_batch")] fn sign_txn_in_batch( &self, raw_txn: Vec<RawUserTransaction>, ) -> FutureResult<Vec<SignedUserTransaction>>; +/// Notes: +/// - Signer is inferred from each txn.sender; accounts must be unlocked. +/// - Output order matches input order. +/// - Current behavior: any failure errors the whole batch.
60-67: unlock_in_batch: document None-duration behavior and return ordering
The RPC maps None to u32::MAX server-side. Please reflect that here and state that results are ordered to the input and that accounts with wrong passwords cause the entire batch to fail (if that’s the intended behavior).
 /// unlock accounts for duration in seconds, default to u32::max.
 #[rpc(name = "account.unlock_in_batch")]
 fn unlock_in_batch(
     &self,
     batch: Vec<(AccountAddress, String)>,
     duration: Option<u32>,
 ) -> FutureResult<Vec<AccountInfo>>;
+/// Notes:
+/// - If duration is None, the server unlocks accounts for u32::MAX seconds.
+/// - Output order matches input order.
+/// - Current behavior: any failure errors the whole batch.
rpc/api/src/txpool/mod.rs (3)
19-21: New submit_transactions RPC — clarify behavior on partial validation failures
Good addition. Please define whether add_txns(txns) is all-or-nothing or per-tx error reporting. Currently the server collects ids and returns them if add_txns succeeds; if any tx fails, does the entire batch fail? If partial success is possible, consider returning a per-tx result vector.
If you decide to keep all-or-nothing, add docs to make this explicit.
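If partial success is chosen instead, the per-tx result vector could take a shape like the sketch below; TxSubmitResult and submit_batch are illustrative stand-ins, not the current RPC contract, and (id, valid) pairs stand in for real transactions and their validation outcome.

```rust
// Hypothetical per-tx result shape for batch submission.
#[derive(Debug, PartialEq)]
enum TxSubmitResult {
    Accepted(u64),    // stand-in for the accepted txn id/hash
    Rejected(String), // validation error message for this txn only
}

fn submit_batch(txns: Vec<(u64, bool)>) -> Vec<TxSubmitResult> {
    txns.into_iter()
        .map(|(id, valid)| {
            if valid {
                TxSubmitResult::Accepted(id)
            } else {
                TxSubmitResult::Rejected(format!("txn {} failed validation", id))
            }
        })
        .collect()
}

fn main() {
    // Callers see exactly which transactions were rejected
    // instead of a single all-or-nothing error.
    let results = submit_batch(vec![(1, true), (2, false), (3, true)]);
    println!("{:?}", results);
}
```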
46-53: next_sequence_number_in_batch: specify None semantics and stability of ordering
The double-Option return is subtle. Please document:
- Outer None means pool read lock unavailable (non-blocking behavior).
- Inner Option per address is Some(next_seq) if there are pending txns for that sender in txpool, else None.
- Output order matches input order.
Suggested docs:
 /// Returns next valid sequence number for given sender
 /// or `None` if there are no pending transactions from that sender in txpool.
 #[rpc(name = "txpool.next_sequence_number_in_batch")]
 fn next_sequence_number_in_batch(
     &self,
     addresses: Vec<AccountAddress>,
 ) -> FutureResult<Option<Vec<(AccountAddress, Option<u64>)>>>;
+/// Notes:
+/// - Outer None: txpool lock not acquired; caller may retry.
+/// - Inner None per address: no pending txns for that sender.
+/// - Preserves input order 1:1 with addresses.
54-57: Misplaced/copy-paste doc on txpool.state
The comment “or None if there are no pending transactions…” seems copied from the sequence-number method and doesn’t describe txpool.state. Please fix the doc.
-/// or `None` if there are no pending transactions from that sender in txpool.
 #[rpc(name = "txpool.state")]
 fn state(&self) -> FutureResult<TxPoolStatus>;
txpool/src/pool/queue.rs (1)
468-496: Batch next_sequence_number: good read-lock scoping; minor allocation/clone tweaks
The logic is correct and consistent with the single-address variant. Two micro-improvements:
- Hoist stale_id and a single client clone outside the loop to avoid repetitive setup.
- Optional: pre-allocate the Vec using addresses.len().
 pub fn next_sequence_number_in_batch<C: client::AccountSeqNumberClient>(
     &self,
     client: C,
     addresses: Vec<Address>,
 ) -> Option<Vec<(Address, Option<u64>)>> {
     let pool = match self.pool.try_read() {
         Some(pool) => pool,
         None => return None,
     };
-
-    Some(
-        addresses
-            .iter()
-            .map(|address| {
-                // Also we ignore stale transactions in the queue.
-                let stale_id = None;
-                let state_readiness = ready::State::new(client.clone(), stale_id);
-                (
-                    *address,
-                    pool.pending_from_sender(state_readiness, address)
-                        .last()
-                        .map(|tx| tx.signed().sequence_number().saturating_add(1)),
-                )
-            })
-            .collect::<Vec<_>>(),
-    )
+    // Also we ignore stale transactions in the queue.
+    let stale_id = None;
+    let client = client.clone();
+    let mut out = Vec::with_capacity(addresses.len());
+    for address in addresses.iter() {
+        let state_readiness = ready::State::new(client.clone(), stale_id);
+        let next = pool
+            .pending_from_sender(state_readiness, address)
+            .last()
+            .map(|tx| tx.signed().sequence_number().saturating_add(1));
+        out.push((*address, next));
+    }
+    Some(out)
 }
rpc/server/src/module/account_rpc.rs (2)
164-175: Rename batch parameter to raw_txns for clarity and consistency.
The parameter carries a Vec, but is named in singular form. Align with intent and other batch APIs.
Apply this diff:
-    fn sign_txn_in_batch(
-        &self,
-        raw_txn: Vec<RawUserTransaction>,
-    ) -> FutureResult<Vec<SignedUserTransaction>> {
+    fn sign_txn_in_batch(
+        &self,
+        raw_txns: Vec<RawUserTransaction>,
+    ) -> FutureResult<Vec<SignedUserTransaction>> {
         let service = self.account.clone();
         let fut = async move {
-            let result = service.sign_txn_in_batch(raw_txn).await?;
+            let result = service.sign_txn_in_batch(raw_txns).await?;
             Ok(result)
         }
         .map_err(map_err);
         Box::pin(fut.boxed())
     }
197-213: Confirm default unlock duration of u32::MAX seconds is intentional; consider a config-driven default and max clamp.
Using ~136 years as the default unlock TTL could surprise operators if the caller omits duration. If product intent is “indefinite until restart,” document it; otherwise, consider reading a default TTL from NodeConfig (if available) and enforcing a sane upper bound (e.g., hours/days).
Would you like me to wire this to a config key and add validation?
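The default-plus-clamp idea can be sketched as below; the constant names and the one-hour/one-day values are assumptions for illustration, not values from NodeConfig.

```rust
// Hypothetical config-driven default and upper bound for the unlock TTL.
const DEFAULT_UNLOCK_SECS: u32 = 3600;    // assumed config default: 1 hour
const MAX_UNLOCK_SECS: u32 = 24 * 3600;   // assumed upper bound: 1 day

/// Resolve the effective TTL: fall back to the default when the caller
/// omits duration, and clamp anything above the configured maximum.
fn effective_unlock_secs(requested: Option<u32>) -> u32 {
    requested.unwrap_or(DEFAULT_UNLOCK_SECS).min(MAX_UNLOCK_SECS)
}

fn main() {
    println!("omitted -> {}", effective_unlock_secs(None));
    println!("explicit -> {}", effective_unlock_secs(Some(60)));
    println!("clamped -> {}", effective_unlock_secs(Some(u32::MAX)));
}
```

With this shape, a caller passing u32::MAX no longer yields a ~136-year unlock; it is silently capped at the configured maximum.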
txpool/api/src/lib.rs (2)
48-53: Clarify state_root-based API semantics in the docs.
The comment still reads generically. Explicitly mention what state_root represents to avoid ambiguity during state-root migration.
Apply this doc tweak:
-    /// alike get_pending_txns, it needs the pool client with the specific account state
+    /// Like `get_pending_txns`, but evaluates readiness against a specific global account state
+    /// identified by `state_root` (the Sparse Merkle Tree root). The pool consults this state
+    /// when determining nonces/balances for transaction readiness.
     fn get_pending_with_state(
59-65: Document outer vs inner Option semantics for batch sequence queries.
Outer None means the pool read lock wasn’t acquired (non-blocking behavior); inner None means “no pending txns for this address.” Callers need this distinction.
Apply this doc tweak:
-    /// Returns next valid sequence number for given sender
-    /// or `None` if there are no pending transactions from that sender.
+    /// Returns next valid sequence numbers in batch.
+    /// - Returns `None` (outer) if the pool read lock isn’t available (non-blocking fallback).
+    /// - Returns `Some(Vec<(address, seq_opt)>)` otherwise, where `seq_opt == None` indicates
+    ///   there are no pending transactions for that address in the pool.
     fn next_sequence_number_in_batch(
txpool/mock-service/src/lib.rs (3)
105-112: Provide non-panicking stubs for get_pending_with_state in the mock.
Returning a sensible fallback keeps tests using the mock from accidentally panicking.
Apply this diff:
 fn get_pending_with_state(
     &self,
     _max_len: u64,
     _current_timestamp_secs: Option<u64>,
     _state_root: HashValue,
 ) -> Vec<SignedUserTransaction> {
-    todo!()
+    // Ignore state_root in the mock; reuse existing behavior.
+    self.get_pending_txns(Some(_max_len), _current_timestamp_secs)
 }
114-121: Return a safe default instead of todo!() for next_sequence_number_with_state.
Keeps the mock usable even if this path is hit.
Apply this diff:
 fn next_sequence_number_with_state(
     &self,
     _address: AccountAddress,
     _state_root: HashValue,
 ) -> Option<u64> {
-    todo!()
+    // Mock: no pending txns tracked by address; return None.
+    None
 }
122-127: Implement a harmless batch default in the mock to avoid runtime panics.
Map all addresses to None to reflect “no pending” semantics.
Apply this diff:
 fn next_sequence_number_in_batch(
     &self,
     _addresses: Vec<AccountAddress>,
 ) -> Option<Vec<(AccountAddress, Option<u64>)>> {
-    todo!()
+    Some(_addresses.into_iter().map(|a| (a, None)).collect())
 }
account/api/src/service.rs (3)
37-41: Name the batch parameter raw_txns and add brief docs on error semantics.
Pluralize the Vec parameter and document that the call fails on the first error (no partial success), matching manager behavior.
Apply this diff:
-    async fn sign_txn_in_batch(
-        &self,
-        raw_txn: Vec<RawUserTransaction>,
-    ) -> Result<Vec<SignedUserTransaction>>;
+    /// Signs a batch of raw user transactions.
+    /// Fails the whole batch on the first error; no partial successes are returned.
+    async fn sign_txn_in_batch(
+        &self,
+        raw_txns: Vec<RawUserTransaction>,
+    ) -> Result<Vec<SignedUserTransaction>>;
173-185: Keep naming consistent, and fix the panic message spelling.
Use raw_txns and correct “Unexpect” → “Unexpected”. Consider returning an error instead of panic to match Result ergonomics, but that aligns with existing style and can be refactored separately.
Apply this diff:
-    async fn sign_txn_in_batch(
-        &self,
-        raw_txn: Vec<RawUserTransaction>,
-    ) -> Result<Vec<SignedUserTransaction>> {
+    async fn sign_txn_in_batch(
+        &self,
+        raw_txns: Vec<RawUserTransaction>,
+    ) -> Result<Vec<SignedUserTransaction>> {
         let response = self
-            .send(AccountRequest::SignTxnInBatch { txns: raw_txn })
+            .send(AccountRequest::SignTxnInBatch { txns: raw_txns })
             .await??;
         if let AccountResponse::SignedTxnList(txns) = response {
             Ok(txns)
         } else {
-            panic!("Unexpect response type.")
+            panic!("Unexpected response type.")
         }
     }
203-216: Add brief docs for unlock_account_in_batch and fix “Unexpect” → “Unexpected”.
Docs clarify batch failure semantics; spelling fix improves consistency.
Apply this diff:
-    async fn unlock_account_in_batch(
+    /// Unlock multiple accounts in one call. Fails the whole batch on the first error.
+    async fn unlock_account_in_batch(
         &self,
         batch: Vec<(AccountAddress, String)>,
         duration: std::time::Duration,
     ) -> Result<Vec<AccountInfo>> {
         let response = self
             .send(AccountRequest::UnlockAccountInBatch(batch, duration))
             .await??;
         if let AccountResponse::AccountList(account_info) = response {
             Ok(account_info)
         } else {
-            panic!("Unexpect response type.")
+            panic!("Unexpected response type.")
         }
     }
cmd/tx-factory/src/main.rs (4)
589-605: LGTM; simplified mapping is fine.

This is a clear and correct simplification. Consider reusing `next_sequence_number_in_batch` for singletons to remove duplication, but not necessary.
607-651: Minor efficiency/readability tweaks for batch send.

- Compute `expiration_timestamp` once per sender (or once globally) rather than per txn inside the inner loop.
- For very large `round_num`, assembling all txns into memory may spike RAM. Consider chunked submission.

I can provide a chunked submission variant if you expect large batches.
87-87: Remove stale commented code to reduce noise.

The commented `INITIAL_BALANCE` path is no longer used; consider deleting to keep the codebase lean.

Also applies to: 490-499
662-689: Stress split: add guardrails for odd/small account counts.

`mid = sequences.len() >> 1` with small/odd sizes can produce empty halves or uneven pairing. Add a sanity check and early return/log when `mid == 0` or `mid == sequences.len()`.

If desired, I can send a small patch that enforces even account counts or rounds down and logs.
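To make the guard concrete, here is a minimal sketch of the suggested check. The helper name `split_for_stress` is hypothetical, not from this PR; it refuses inputs too small to form two non-empty halves instead of silently producing an empty side:

```rust
/// Split a list of account sequences into two halves for stress pairing.
/// Returns None when no valid pairing is possible (0 or 1 element), which
/// covers both the `mid == 0` and `mid == sequences.len()` degenerate cases.
/// Odd sizes still produce uneven halves; callers should decide whether
/// to round down or reject them.
fn split_for_stress<T>(sequences: Vec<T>) -> Option<(Vec<T>, Vec<T>)> {
    let mid = sequences.len() >> 1;
    if mid == 0 || mid == sequences.len() {
        return None;
    }
    let mut left = sequences;
    let right = left.split_off(mid);
    Some((left, right))
}
```

With this shape the caller can log and return early on `None` rather than proceeding with an empty half.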
txpool/src/tx_pool_service_impl.rs (2)
184-195: Batch sequence API wiring looks good.

Metrics label reuse is fine; consider a distinct label (e.g., `next_sequence_number_in_batch`) if you need separate observability.
197-210: Naming mismatch: `next_sequence_number_with_header` uses a state_root.

The inner method name no longer reflects its parameter. Consider renaming to `next_sequence_number_with_state` for clarity and to match the trait naming.

Apply this diff if you agree (local rename):

```diff
-    pub(crate) fn next_sequence_number_with_header(
+    pub(crate) fn next_sequence_number_with_state(
         &self,
         address: AccountAddress,
         state_root: HashValue,
     ) -> Option<u64> {
@@
         self.queue.next_sequence_number(pool_client, &address)
     }
```

And update the caller at Line 208 accordingly.
miner/src/lib.rs (1)
290-293: Nit: reuse the computed `block_hash` in log.

You already compute `block_hash`; use it in the log for consistency.

```diff
-        info!(target: "miner", "Minted new block: {}", block.id());
+        info!(target: "miner", "Minted new block: {}", block_hash);
```

miner/src/create_block_template/process_transaction.rs (1)
129-205: Processing loop looks correct; consider removing `_untouched_user_txns`.

The `_untouched_user_txns` is computed but unused; consider dropping the variable to reduce noise. Also, thanks for removing invalid txns from the pool; that keeps the mempool clean.
miner/src/create_block_template/block_builder_service.rs (5)
616-645: Prevent duplicates and cap the batch size in `fetch_transactions` to avoid wasted pre-exec work.

- Duplicates: Appending uncle (blue) block txns may duplicate txns already returned by the txpool or across uncles. This wastes VM cycles during pre-execution.
- Cap: After merging and sorting you may exceed `max_txns`. Truncate to respect the cap and limit pre-exec load.

Apply:

```diff
-use std::collections::VecDeque;
+use std::collections::{HashSet, VecDeque};
@@
-        blue_blocks.iter().for_each(|block| {
-            block.transactions().iter().for_each(|transaction| {
-                pending_transactions.push(transaction.clone());
-            })
-        });
+        // Dedup by txn id while merging uncle transactions
+        let mut seen: HashSet<HashValue> =
+            pending_transactions.iter().map(|t| t.id()).collect();
+        for block in blue_blocks.iter() {
+            for txn in block.transactions().iter() {
+                if seen.insert(txn.id()) {
+                    pending_transactions.push(txn.clone());
+                }
+            }
+        }
@@
-        Ok(pending_transactions)
+        // Respect the requested cap to avoid unnecessary pre-execution
+        if pending_transactions.len() > max_txns as usize {
+            pending_transactions.truncate(max_txns as usize);
+        }
+        Ok(pending_transactions)
```

Also applies to: 1-1
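The dedup-then-truncate pattern above can be isolated and tested on its own. A simplified sketch, where `(u64, &str)` stands in for a real (txn id, txn) pair and the production code would key on `txn.id()`:

```rust
use std::collections::HashSet;

/// Merge uncle-block transactions into the pending set, skipping ids already
/// present, then truncate to the requested cap.
fn merge_dedup_capped(
    mut pending: Vec<(u64, &'static str)>,
    uncles: Vec<(u64, &'static str)>,
    max_txns: usize,
) -> Vec<(u64, &'static str)> {
    // Seed the seen-set with ids already returned by the txpool.
    let mut seen: HashSet<u64> = pending.iter().map(|(id, _)| *id).collect();
    for txn in uncles {
        if seen.insert(txn.0) {
            pending.push(txn);
        }
    }
    // Enforce the cap so pre-execution never sees more than max_txns.
    if pending.len() > max_txns {
        pending.truncate(max_txns);
    }
    pending
}
```

Seeding `seen` from the pool's own result also catches a txn that appears both in the pool and in an uncle, not just duplicates across uncles.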
531-559: Don’t `expect` in the background pre-exec task; convert to error logging and early return.

A panic here crashes the miner’s background worker, which is avoidable and makes diagnosing transient txpool/state issues harder. Prefer logging + early return on errors from `ProcessTransactionData::new()`, `.process()`, and `sender.send()`.

```diff
-        async_std::task::spawn(async move {
-            let process_trans = ProcessTransactionData::new(
+        async_std::task::spawn(async move {
+            let process_trans = match ProcessTransactionData::new(
                 storage,
                 selected_header,
                 Arc::new(main.into_statedb()),
                 txn_provider,
                 txn,
                 block_gas_limit,
                 0,
                 block_meta.clone(),
                 vm_metrics,
-            )
-            .expect("failed to init process transaction");
+            ) {
+                Ok(v) => v,
+                Err(e) => {
+                    error!("failed to init process transaction: {:?}", e);
+                    return;
+                }
+            };
-            let result = process_trans
-                .process()
-                .expect("failed to process transaction");
+            let result = match process_trans.process() {
+                Ok(r) => r,
+                Err(e) => {
+                    error!("failed to process transaction: {:?}", e);
+                    return;
+                }
+            };
-            sender
-                .send(ProcessHeaderTemplate {
+            if let Err(e) = sender.send(ProcessHeaderTemplate {
                 header: previous_header,
                 uncles,
                 difficulty,
                 strategy,
                 transaction_outputs: result,
                 block_metadata: block_meta,
                 pruning_point,
-            })
-            .expect("failed to send result");
+            }) {
+                error!("failed to send ProcessHeaderTemplate: {:?}", e);
+            }
         });
```
153-201: Bound concurrency/backpressure to prevent runaway spawning under bursty request rates.

Each `BlockTemplateRequest` spawns a pre-exec task, and the channel is unbounded. Under bursts (e.g., fast headers or retries), tasks and queued results can grow without bound.

Options:

- Use a bounded `crossbeam::channel::bounded(N)` and `try_send` to drop or coalesce when full.
- Track in-flight pre-exec tasks with an `AtomicUsize` and skip spawning if above a threshold (e.g., 16; aligns with prior MinerService task-pool learning).
- Debounce `BlockTemplateRequest` events or coalesce pending requests.

I can prototype a bounded-channel variant if you confirm desired semantics (drop, latest-wins, or backpressure).
Also applies to: 530-560
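The drop-on-full option can be illustrated with the standard library's bounded channel; the service itself uses crossbeam, whose `bounded()` + `try_send()` have the same non-blocking shape, so this is a stand-in sketch rather than the actual service code:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

/// Try to enqueue a pre-exec request without blocking; report whether it was
/// accepted. When the buffer is full the request is dropped, which is the
/// "drop/coalesce under burst" semantics discussed above.
fn submit_bounded<T>(tx: &SyncSender<T>, req: T) -> bool {
    match tx.try_send(req) {
        Ok(()) => true,
        // Buffer full: drop the request instead of spawning more work.
        Err(TrySendError::Full(_dropped)) => false,
        // Receiver gone: nothing to do.
        Err(TrySendError::Disconnected(_dropped)) => false,
    }
}
```

A latest-wins variant would instead drain the channel before sending, so the consumer only ever sees the freshest template request.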
153-201: Minor: avoid repeated `get_shared` in the hot loop (micro).

Inside the 10ms interval closure, `ctx.get_shared::<Arc<Storage>>()` is called every tick. It’s cheap (Arc clone), but you can read it once into a local (captured by the closure) by retrieving it just before `run_interval` and moving it in. Not a blocker.
564-608: Large commented-out legacy path — consider removing or gating with a feature flag.

The old `OpenedBlock` path is kept commented. If it’s no longer needed, delete it to reduce noise; if it’s for fallback, consider `#[cfg(feature = "legacy-template")]` to keep it compile-safe.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (29)
- account/api/src/message.rs (3 hunks)
- account/api/src/service.rs (3 hunks)
- account/service/src/service.rs (2 hunks)
- account/src/account_manager.rs (2 hunks)
- block-relayer/src/block_relayer.rs (1 hunks)
- chain/api/src/errors.rs (2 hunks)
- chain/open-block/src/lib.rs (3 hunks)
- chain/src/chain.rs (4 hunks)
- chain/src/verifier/mod.rs (2 hunks)
- cmd/tx-factory/src/main.rs (10 hunks)
- config/src/genesis_config.rs (1 hunks)
- miner/Cargo.toml (1 hunks)
- miner/src/create_block_template/block_builder_service.rs (11 hunks)
- miner/src/create_block_template/mod.rs (1 hunks)
- miner/src/create_block_template/process_transaction.rs (1 hunks)
- miner/src/lib.rs (5 hunks)
- miner/tests/miner_test.rs (1 hunks)
- rpc/api/src/account/mod.rs (2 hunks)
- rpc/api/src/txpool/mod.rs (2 hunks)
- rpc/client/src/lib.rs (3 hunks)
- rpc/server/src/module/account_rpc.rs (2 hunks)
- rpc/server/src/module/txpool_rpc.rs (2 hunks)
- sync/src/block_connector/execute_service.rs (6 hunks)
- txpool/api/src/lib.rs (1 hunks)
- txpool/mock-service/src/lib.rs (1 hunks)
- txpool/src/pool/queue.rs (1 hunks)
- txpool/src/pool_client.rs (3 hunks)
- txpool/src/tx_pool_service_impl.rs (5 hunks)
- vm/vm-runtime/src/starcoin_vm.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-03T03:25:16.732Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4572
File: miner/src/create_block_template/new_header_service.rs:0-0
Timestamp: 2025-07-03T03:25:16.732Z
Learning: In Starcoin's miner/src/create_block_template/new_header_service.rs, panic! is intentionally used in impossible code branches (like equal block ID comparison after early equality check) to detect logical errors early and ensure immediate restart rather than allowing potentially corrupted state to continue.
Applied to files:
- miner/tests/miner_test.rs
- miner/src/lib.rs
- miner/src/create_block_template/block_builder_service.rs
📚 Learning: 2025-08-08T10:20:45.797Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: chain/src/chain.rs:1104-1106
Timestamp: 2025-08-08T10:20:45.797Z
Learning: In starcoin/chain/src/chain.rs, ChainReader::get_block(hash) is intentionally implemented to return any block from storage without verifying membership in the current main chain/DAG view (the previous exist_block_filter was removed). Callers that require main-chain-only results should perform their own existence/membership checks (e.g., exist_block/check_exist_block) as needed.
Applied to files:
- chain/src/verifier/mod.rs
- sync/src/block_connector/execute_service.rs
- chain/src/chain.rs
📚 Learning: 2025-08-08T10:27:43.881Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: txpool/src/pool/queue.rs:389-392
Timestamp: 2025-08-08T10:27:43.881Z
Learning: In starcoinorg/starcoin#4605, txpool/src/pool/queue.rs: For PendingOrdering::Priority, if pool.try_read() fails, the design is to return an empty Vec and proceed with block building without waiting for the lock (non-blocking behavior is intentional).
Applied to files:
- txpool/src/pool/queue.rs
- txpool/api/src/lib.rs
- txpool/src/pool_client.rs
- txpool/mock-service/src/lib.rs
- miner/src/create_block_template/block_builder_service.rs
📚 Learning: 2025-08-08T10:25:49.039Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: txpool/mock-service/src/lib.rs:114-120
Timestamp: 2025-08-08T10:25:49.039Z
Learning: In PR starcoinorg/starcoin#4605, txpool/mock-service/src/lib.rs: MockTxPoolService::next_sequence_number_with_header is currently unused; keeping todo!() in the mock is acceptable and won’t affect runtime unless invoked.
Applied to files:
- txpool/src/pool/queue.rs
- rpc/server/src/module/txpool_rpc.rs
- txpool/api/src/lib.rs
- txpool/src/pool_client.rs
- txpool/mock-service/src/lib.rs
- txpool/src/tx_pool_service_impl.rs
📚 Learning: 2025-08-08T10:16:46.394Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: chain/src/chain.rs:185-187
Timestamp: 2025-08-08T10:16:46.394Z
Learning: In starcoin/chain/src/chain.rs, BlockChain::statedb currently consumes self to move out ChainStateDB during block building to avoid cloning or recreating the state DB for transaction filtering. This is intentional for performance; a rename to into_statedb is acceptable but behavior should remain a consuming getter.
Applied to files:
- txpool/src/pool_client.rs
- chain/src/chain.rs
- chain/open-block/src/lib.rs
📚 Learning: 2025-02-10T10:00:57.422Z
Learnt from: sanlee42
PR: starcoinorg/starcoin#4394
File: miner/src/lib.rs:259-263
Timestamp: 2025-02-10T10:00:57.422Z
Learning: The MinerService's task_pool is limited to 16 elements (DEFAULT_TASK_POOL_SIZE), making Vec a suitable data structure despite O(n) front removal operations.
Applied to files:
miner/src/create_block_template/block_builder_service.rs
🧬 Code graph analysis (22)
rpc/api/src/account/mod.rs (3)
account/api/src/service.rs (2)
sign_txn_in_batch(37-40)sign_txn_in_batch(173-185)account/src/account_manager.rs (1)
sign_txn_in_batch(276-298)rpc/server/src/module/account_rpc.rs (2)
sign_txn_in_batch(164-175)unlock_in_batch(197-213)
rpc/client/src/lib.rs (6)
cmd/tx-factory/src/main.rs (1)
next_sequence_number_in_batch(562-587)txpool/src/pool/queue.rs (2)
next_sequence_number_in_batch(470-496)addresses(481-494)rpc/api/src/txpool/mod.rs (3)
next_sequence_number_in_batch(49-52)submit_transaction(17-17)submit_transactions(20-20)rpc/server/src/module/txpool_rpc.rs (3)
next_sequence_number_in_batch(112-118)submit_transaction(34-44)submit_transactions(46-59)txpool/api/src/lib.rs (1)
next_sequence_number_in_batch(61-64)txpool/src/tx_pool_service_impl.rs (2)
next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)
account/src/account_manager.rs (3)
account/api/src/service.rs (4)
unlock_account_in_batch(47-51)unlock_account_in_batch(203-216)sign_txn_in_batch(37-40)sign_txn_in_batch(173-185)account/src/account.rs (2)
address(154-156)load(66-103)rpc/server/src/module/account_rpc.rs (1)
sign_txn_in_batch(164-175)
cmd/tx-factory/src/main.rs (5)
txpool/src/pool/queue.rs (4)
next_sequence_number_in_batch(470-496)addresses(481-494)new(176-181)new(230-250)rpc/api/src/txpool/mod.rs (1)
next_sequence_number_in_batch(49-52)rpc/client/src/lib.rs (5)
next_sequence_number_in_batch(300-306)state_reader(534-539)new(107-112)new(138-174)new(1220-1236)rpc/server/src/module/txpool_rpc.rs (1)
next_sequence_number_in_batch(112-118)txpool/src/tx_pool_service_impl.rs (2)
next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)
vm/vm-runtime/src/starcoin_vm.rs (2)
vm/vm-runtime/src/lib.rs (1)
execute_block(37-42)vm/vm-runtime/src/parallel_executor/mod.rs (1)
execute_block(67-111)
chain/src/verifier/mod.rs (2)
chain/src/chain.rs (5)
has_dag_block(1365-1376)e(173-173)e(688-688)e(1600-1600)is_dag_ancestor_of(1394-1396)types/src/block/mod.rs (1)
pruning_point(341-343)
txpool/src/pool/queue.rs (7)
cmd/tx-factory/src/main.rs (2)
next_sequence_number_in_batch(562-587)new(225-258)rpc/api/src/txpool/mod.rs (1)
next_sequence_number_in_batch(49-52)rpc/client/src/lib.rs (4)
next_sequence_number_in_batch(300-306)new(107-112)new(138-174)new(1220-1236)rpc/server/src/module/txpool_rpc.rs (1)
next_sequence_number_in_batch(112-118)txpool/api/src/lib.rs (2)
next_sequence_number_in_batch(61-64)new(100-102)txpool/mock-service/src/lib.rs (3)
next_sequence_number_in_batch(122-127)pool(60-67)new(22-24)txpool/src/tx_pool_service_impl.rs (3)
next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)new(37-74)
rpc/server/src/module/txpool_rpc.rs (6)
rpc/api/src/txpool/mod.rs (2)
submit_transactions(20-20)next_sequence_number_in_batch(49-52)rpc/client/src/lib.rs (2)
submit_transactions(313-319)next_sequence_number_in_batch(300-306)rpc/server/src/module/mod.rs (1)
convert_to_rpc_error(64-67)txpool/src/pool/queue.rs (3)
ready(396-412)next_sequence_number_in_batch(470-496)addresses(481-494)txpool/api/src/lib.rs (1)
next_sequence_number_in_batch(61-64)txpool/src/tx_pool_service_impl.rs (2)
next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)
txpool/api/src/lib.rs (7)
txpool/mock-service/src/lib.rs (4)
get_pending_with_state(105-112)next_sequence_number(74-76)next_sequence_number_in_batch(122-127)next_sequence_number_with_state(114-120)txpool/src/tx_pool_service_impl.rs (6)
get_pending_with_state(145-170)next_sequence_number(174-182)next_sequence_number(380-383)next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)next_sequence_number_with_state(197-210)txpool/src/pool/queue.rs (3)
next_sequence_number(448-466)next_sequence_number_in_batch(470-496)addresses(481-494)rpc/api/src/txpool/mod.rs (2)
next_sequence_number(44-44)next_sequence_number_in_batch(49-52)rpc/server/src/module/txpool_rpc.rs (2)
next_sequence_number(107-110)next_sequence_number_in_batch(112-118)cmd/tx-factory/src/main.rs (1)
next_sequence_number_in_batch(562-587)rpc/client/src/lib.rs (1)
next_sequence_number_in_batch(300-306)
txpool/src/pool_client.rs (2)
chain/open-block/src/lib.rs (2)
state_root(131-133)new(54-112)txpool/src/tx_pool_service_impl.rs (1)
new(37-74)
txpool/mock-service/src/lib.rs (6)
txpool/api/src/lib.rs (3)
get_pending_with_state(48-53)next_sequence_number_with_state(67-71)next_sequence_number_in_batch(61-64)txpool/src/tx_pool_service_impl.rs (4)
get_pending_with_state(145-170)next_sequence_number_with_state(197-210)next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)cmd/tx-factory/src/main.rs (1)
next_sequence_number_in_batch(562-587)txpool/src/pool/queue.rs (1)
next_sequence_number_in_batch(470-496)rpc/client/src/lib.rs (1)
next_sequence_number_in_batch(300-306)rpc/server/src/module/txpool_rpc.rs (1)
next_sequence_number_in_batch(112-118)
account/api/src/service.rs (3)
account/src/account_manager.rs (4)
sign_txn_in_batch(276-298)unlock_account(87-100)unlock_account_in_batch(102-119)account_info(223-238)rpc/api/src/account/mod.rs (1)
sign_txn_in_batch(46-49)rpc/server/src/module/account_rpc.rs (1)
sign_txn_in_batch(164-175)
rpc/server/src/module/account_rpc.rs (4)
account/api/src/service.rs (2)
sign_txn_in_batch(37-40)sign_txn_in_batch(173-185)account/src/account_manager.rs (1)
sign_txn_in_batch(276-298)rpc/api/src/account/mod.rs (2)
sign_txn_in_batch(46-49)unlock_in_batch(62-66)rpc/client/src/lib.rs (1)
map_err(1239-1241)
rpc/api/src/txpool/mod.rs (7)
rpc/client/src/lib.rs (2)
submit_transactions(313-319)next_sequence_number_in_batch(300-306)rpc/server/src/module/txpool_rpc.rs (2)
submit_transactions(46-59)next_sequence_number_in_batch(112-118)cmd/tx-factory/src/main.rs (1)
next_sequence_number_in_batch(562-587)txpool/src/pool/queue.rs (2)
next_sequence_number_in_batch(470-496)addresses(481-494)txpool/api/src/lib.rs (1)
next_sequence_number_in_batch(61-64)txpool/mock-service/src/lib.rs (1)
next_sequence_number_in_batch(122-127)txpool/src/tx_pool_service_impl.rs (2)
next_sequence_number_in_batch(184-195)next_sequence_number_in_batch(385-391)
account/service/src/service.rs (2)
block-relayer/src/block_relayer.rs (1)
txns(164-164)rpc/api/src/types.rs (1)
txns(808-810)
miner/src/create_block_template/process_transaction.rs (2)
executor/src/executor.rs (1)
execute_transactions(13-19)miner/src/create_block_template/block_builder_service.rs (1)
new(289-318)
sync/src/block_connector/execute_service.rs (3)
chain/src/chain.rs (10)
info(1031-1037)new(77-88)time_service(196-198)time_service(1192-1194)dag(200-202)execute(1244-1246)e(173-173)e(688-688)e(1600-1600)get_block(1110-1112)miner/src/create_block_template/block_builder_service.rs (17)
crossbeam(303-303)new(289-318)dag(791-799)ctx(95-95)ctx(112-112)ctx(117-117)ctx(118-118)ctx(119-119)ctx(125-125)ctx(138-138)ctx(150-150)ctx(151-151)ctx(153-153)ctx(160-161)ctx(163-163)ctx(206-206)ctx(207-207)sync/src/block_connector/write_block_chain.rs (1)
execute(422-432)
chain/src/chain.rs (4)
types/src/block/mod.rs (5)
block_id(1050-1052)state_root(365-367)header(845-847)header(1222-1224)header(1238-1240)executor/src/block_executor.rs (2)
statedb(132-133)block_execute(64-126)state/api/src/chain_state.rs (1)
state_root(128-128)state/statedb/src/lib.rs (1)
state_root(484-486)
chain/open-block/src/lib.rs (3)
chain/src/chain.rs (2)
new(77-88)chain_state(1795-1797)chain/api/src/chain.rs (1)
chain_state(138-138)rpc/client/src/lib.rs (1)
state_reader(534-539)
miner/src/lib.rs (1)
miner/src/create_block_template/block_builder_service.rs (12)
create_block_template(417-608)stopped(205-209)ctx(95-95)ctx(112-112)ctx(117-117)ctx(118-118)ctx(119-119)ctx(125-125)ctx(138-138)ctx(150-150)handle_event(213-222)handle_event(226-242)
miner/src/create_block_template/block_builder_service.rs (4)
types/src/block/mod.rs (7)
block_info(1218-1220)block(1214-1216)state_root(365-367)uncles(852-857)uncles(1234-1236)timestamp(345-347)author(353-355)sync/src/block_connector/execute_service.rs (13)
ctx(167-167)ctx(169-169)ctx(170-170)ctx(178-178)ctx(179-179)ctx(181-181)ctx(188-189)ctx(235-235)ctx(236-236)crossbeam(43-43)new(42-51)handle_event(242-289)handle_event(293-321)miner/src/create_block_template/process_transaction.rs (1)
new(59-89)vm/types/src/block_metadata/mod.rs (3)
timestamp(137-139)new_with_parents(76-103)author(154-156)
txpool/src/tx_pool_service_impl.rs (7)
txpool/api/src/lib.rs (4)
get_pending_with_state(48-53)next_sequence_number_in_batch(61-64)next_sequence_number_with_state(67-71)new(100-102)txpool/mock-service/src/lib.rs (4)
get_pending_with_state(105-112)next_sequence_number_in_batch(122-127)next_sequence_number_with_state(114-120)new(22-24)chain/open-block/src/lib.rs (1)
state_root(131-133)cmd/tx-factory/src/main.rs (1)
next_sequence_number_in_batch(562-587)txpool/src/pool/queue.rs (2)
next_sequence_number_in_batch(470-496)addresses(481-494)rpc/client/src/lib.rs (1)
next_sequence_number_in_batch(300-306)rpc/server/src/module/txpool_rpc.rs (1)
next_sequence_number_in_batch(112-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: run benchmark
```rust
    pub fn sign_txn_in_batch(
        &self,
        raw_txns: Vec<RawUserTransaction>,
    ) -> AccountResult<Vec<SignedUserTransaction>> {
        let mut signed_transactions = vec![];
        for raw_txn in raw_txns {
            let signer_address = raw_txn.sender(); // TODO: check if the signer is the same as the one in the txi
            let pass = self.key_cache.write().get_pass(&signer_address);
            match pass {
                None => return Err(AccountError::AccountLocked(signer_address)),
                Some(p) => {
                    let account = Account::load(signer_address, Some(p), self.store.clone())?
                        .ok_or(AccountError::AccountNotExist(signer_address))?;
                    signed_transactions.push(
                        account
                            .sign_txn(raw_txn)
                            .map_err(AccountError::TransactionSignError)?,
                    );
                }
            }
        }
        Ok(signed_transactions)
    }
```
Critical: Early return on first error will leave partial state
The sign_txn_in_batch method returns immediately when encountering the first locked account, which could leave the caller with an incomplete understanding of which accounts are locked.
The current implementation will fail-fast on the first locked account, potentially masking other locked accounts in the batch. Consider either:
- Collecting all errors and returning a comprehensive error result
- Processing all unlocked accounts and returning partial results with error indicators
Also, the TODO comment on line 282 needs clarification - what specific signer check is needed?
```diff
     pub fn sign_txn_in_batch(
         &self,
         raw_txns: Vec<RawUserTransaction>,
     ) -> AccountResult<Vec<SignedUserTransaction>> {
         let mut signed_transactions = vec![];
+        let mut errors = vec![];
         for raw_txn in raw_txns {
-            let signer_address = raw_txn.sender(); // TODO: check if the signer is the same as the one in the txi
+            let signer_address = raw_txn.sender();
             let pass = self.key_cache.write().get_pass(&signer_address);
             match pass {
-                None => return Err(AccountError::AccountLocked(signer_address)),
+                None => errors.push((signer_address, AccountError::AccountLocked(signer_address))),
                 Some(p) => {
                     let account = Account::load(signer_address, Some(p), self.store.clone())?
                         .ok_or(AccountError::AccountNotExist(signer_address))?;
                     signed_transactions.push(
                         account
                             .sign_txn(raw_txn)
                             .map_err(AccountError::TransactionSignError)?,
                     );
                 }
             }
         }
+        if !errors.is_empty() {
+            // Return a batch error with all locked accounts
+            return Err(AccountError::BatchOperationFailed(errors));
+        }
         Ok(signed_transactions)
     }
```

Note: This would require adding a new BatchOperationFailed variant to the AccountError enum.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In account/src/account_manager.rs around lines 276-298, the method currently
returns early on the first locked account which yields partial, opaque results;
change it to process all transactions collecting per-transaction successes and
errors and return a single failure variant that contains all errors and any
partial signed transactions. Specifically: add an
AccountError::BatchOperationFailed(Vec<(AccountAddress, AccountError)>,
Vec<SignedUserTransaction>) variant, iterate all raw_txns without early returns,
for each txn attempt to get the pass and load/sign the account—on failure push
(signer_address, error) into an errors vec, on success push the signed txn into
a results vec; at the end, return Ok(results) if errors.is_empty() else
Err(AccountError::BatchOperationFailed(errors, results)). Also address the TODO
by explicitly validating that the signer_address equals the transaction
sender/authenticator field (e.g., check raw_txn.sender() vs the signer in the tx
info) and treat mismatches as an error recorded in the errors vec.
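The collect-all-errors pattern the review asks for can be shown in miniature. This is an illustrative sketch only: `sign_all` is a hypothetical stand-in where a negative item plays the role of a locked account, and real code would carry `AccountError` values rather than strings:

```rust
/// Process every item, accumulating successes and failures, then fail the
/// batch with the complete error list instead of returning on the first error.
fn sign_all(items: Vec<i64>) -> Result<Vec<i64>, Vec<(usize, String)>> {
    let mut signed = Vec::new();
    let mut errors = Vec::new();
    for (idx, item) in items.into_iter().enumerate() {
        if item < 0 {
            // Stand-in for AccountError::AccountLocked
            errors.push((idx, format!("locked: {}", item)));
        } else {
            // Stand-in for actually signing the transaction
            signed.push(item * 2);
        }
    }
    if errors.is_empty() {
        Ok(signed)
    } else {
        Err(errors)
    }
}
```

The key property is that a caller with three locked accounts learns about all three in one round trip, instead of fixing them one retry at a time.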
```rust
    pub fn state_reader(&self) -> Arc<impl ChainStateReader> {
        self.state.clone()
    }
```
This signature will not compile: you cannot return `Arc<impl ChainStateReader>`.

Rust doesn’t allow `impl Trait` nested inside another type like `Arc` in return position. Use a trait object instead.
Apply:

```diff
-    pub fn state_reader(&self) -> Arc<impl ChainStateReader> {
-        self.state.clone()
-    }
+    pub fn state_reader(&self) -> Arc<dyn ChainStateReader + Send + Sync> {
+        self.state.clone()
+    }
```

Note: ChainStateDB must implement ChainStateReader + Send + Sync + 'static for this to coerce. This keeps the field concrete (Arc) while exposing a read-only trait object.
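The coercion in question can be demonstrated with placeholder types (`Reader`, `StateDb`, and `Holder` below are illustrative, not the actual Starcoin types):

```rust
use std::sync::Arc;

trait Reader {
    fn get(&self) -> u64;
}

struct StateDb(u64);

impl Reader for StateDb {
    fn get(&self) -> u64 {
        self.0
    }
}

struct Holder {
    // The field stays concrete; only the accessor erases the type.
    state: Arc<StateDb>,
}

impl Holder {
    // `Arc<impl Reader>` would be rejected here; `Arc<dyn Reader + Send + Sync>`
    // works because Arc<StateDb> unsize-coerces to the trait object, given
    // StateDb: Reader + Send + Sync + 'static.
    fn reader(&self) -> Arc<dyn Reader + Send + Sync> {
        self.state.clone()
    }
}
```

Callers get a cheap `Arc` clone with only the read-only trait surface, which matches the intent of exposing a `ChainStateReader` without cloning the state DB.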
```rust
        let now = SystemTime::now();
        now.duration_since(UNIX_EPOCH)
            .expect("time error")
            .as_secs()
            + DEFAULT_EXPIRATION_TIME
        // let node_info = self
        //     .client
        //     .node_info()
        //     .expect("node_info() should not failed");
        // node_info.now_seconds + DEFAULT_EXPIRATION_TIME
    }
```
🛠️ Refactor suggestion
Use chain/node time for expiration; fall back to system clock.
Relying solely on SystemTime can drift from the node’s canonical time and cause premature/expired txns. Prefer node_info().now_seconds, with a system-time fallback on error.
Apply this diff to make expiration robust:

```diff
-    fn fetch_expiration_time(&self) -> u64 {
-        let now = SystemTime::now();
-        now.duration_since(UNIX_EPOCH)
-            .expect("time error")
-            .as_secs()
-            + DEFAULT_EXPIRATION_TIME
-        // let node_info = self
-        //     .client
-        //     .node_info()
-        //     .expect("node_info() should not failed");
-        // node_info.now_seconds + DEFAULT_EXPIRATION_TIME
-    }
+    fn fetch_expiration_time(&self) -> u64 {
+        match self.client.node_info() {
+            Ok(info) => info.now_seconds + DEFAULT_EXPIRATION_TIME,
+            Err(_) => {
+                let now = SystemTime::now()
+                    .duration_since(UNIX_EPOCH)
+                    .expect("time error")
+                    .as_secs();
+                now + DEFAULT_EXPIRATION_TIME
+            }
+        }
+    }
```
🤖 Prompt for AI Agents
In cmd/tx-factory/src/main.rs around lines 263 to 273, replace the current
unconditional SystemTime-based expiration with logic that first attempts to read
the node time via self.client.node_info().now_seconds and uses that +
DEFAULT_EXPIRATION_TIME; if node_info() fails, fall back to computing the
expiration from SystemTime::now().duration_since(UNIX_EPOCH).expect("time
error").as_secs() + DEFAULT_EXPIRATION_TIME. Make the node_info call
non-panicking (handle Result), propagate or log the error if it occurs, and
ensure the returned expiration is a u64 consistent with the rest of the
function.
```rust
    fn submit_transaction_in_batch(
        &self,
        txns: Vec<(AccountAddress, Vec<RawUserTransaction>)>,
        blocking: bool,
    ) -> Result<()> {
        info!("going to unlock accounts");
        self.client.account_unlock_in_batch(
            txns.iter()
                .map(|(sender, _)| (*sender, self.account_password.clone()))
                .collect(),
            self.unlock_duration,
        )?;

        let signed_transactions = self.client.account_sign_txn_in_batch(
            txns.iter()
                .flat_map(|(_, raw_txns)| raw_txns.clone())
                .collect(),
        )?;

        let hashes = self.client.submit_transactions(signed_transactions)?;
        info!("submitted {} txns", hashes.len());

        if blocking {
            for hash in hashes {
                self.client
                    .watch_txn(hash, Some(Duration::from_secs(self.watch_timeout as u64)))?;
            }
        }
        Ok(())
    }
```
🛠️ Refactor suggestion
Batch flow: validate unlock/sign results and preserve 1:1 counts.
- Check that `unlock_in_batch` returns one result per sender entry; fail fast if not.
- Verify `sign_txn_in_batch` returns exactly the number of raw txns submitted; otherwise some txns weren’t signed.
- Optional: dedup senders in the unlock batch to avoid redundant unlocks if the caller accidentally repeats senders.
Apply this diff to add basic count validation:
```diff
     fn submit_transaction_in_batch(
         &self,
         txns: Vec<(AccountAddress, Vec<RawUserTransaction>)>,
         blocking: bool,
     ) -> Result<()> {
         info!("going to unlock accounts");
-        self.client.account_unlock_in_batch(
+        let unlocked = self.client.account_unlock_in_batch(
             txns.iter()
                 .map(|(sender, _)| (*sender, self.account_password.clone()))
                 .collect(),
             self.unlock_duration,
         )?;
+        if unlocked.len() != txns.len() {
+            warn!(
+                "unlock_in_batch returned {} accounts, expected {}",
+                unlocked.len(),
+                txns.len()
+            );
+        }
-        let signed_transactions = self.client.account_sign_txn_in_batch(
+        let signed_transactions = self.client.account_sign_txn_in_batch(
             txns.iter()
                 .flat_map(|(_, raw_txns)| raw_txns.clone())
                 .collect(),
         )?;
+        let expected = txns.iter().map(|(_, raw_txns)| raw_txns.len()).sum::<usize>();
+        if signed_transactions.len() != expected {
+            bail!(
+                "signed_txn count mismatch: expected {}, got {}",
+                expected,
+                signed_transactions.len()
+            );
+        }
```

If you want sender dedup as a safety net, I can provide a patch using a HashSet.
🤖 Prompt for AI Agents
In cmd/tx-factory/src/main.rs around lines 363 to 393, the batch flow assumes
unlock and sign RPCs succeed for every input but does not validate result
counts; update the function to (1) validate account_unlock_in_batch returns
exactly one result per provided sender (fail fast with a clear Err if counts
differ), (2) verify account_sign_txn_in_batch returns the same number of signed
transactions as the number of raw txns passed in (fail fast if counts differ),
and (3) preserve the 1:1 mapping between input raw transactions and returned
signed transactions before calling submit_transactions; optionally deduplicate
senders before calling account_unlock_in_batch (using a HashSet) to avoid
redundant unlocks if callers repeat senders.
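The sender dedup offered above can be sketched in isolation. This is a minimal, hypothetical example — `u64` stands in for `AccountAddress`, and the function name is illustrative — showing how a `HashSet` keeps only the first occurrence of each sender while preserving order:

```rust
use std::collections::HashSet;

// Keep only the first occurrence of each sender, preserving order,
// so the unlock RPC is not called twice for the same account.
fn dedup_senders(pairs: Vec<(u64, String)>) -> Vec<(u64, String)> {
    let mut seen = HashSet::new();
    pairs
        .into_iter()
        // HashSet::insert returns false for an already-seen sender,
        // so repeats are filtered out.
        .filter(|(sender, _)| seen.insert(*sender))
        .collect()
}
```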
```rust
fn next_sequence_number_in_batch(
    &self,
    addresses: Vec<AccountAddress>,
) -> Result<Vec<(AccountAddress, Option<u64>)>> {
    let seq_numbers = self
        .client
        .next_sequence_number_in_batch(addresses)?
        .ok_or_else(|| format_err!("next_sequence_number_in_batch error"))?;
    Ok(seq_numbers
        .into_iter()
        .map(|(address, seq_number)| match seq_number {
            Some(seq_number) => (address, Some(seq_number)),
            None => {
                let state_reader = self
                    .client
                    .state_reader(StateRootOption::Latest)
                    .expect("state_reader error");
                let account_resource = state_reader
                    .get_account_resource(address)
                    .expect("get_account_resource error");
                let seq = account_resource.map(|resource| resource.sequence_number());
                (address, seq)
            }
        })
        .collect())
}
```
🛠️ Refactor suggestion
Avoid panics in batch sequence fallback; propagate errors instead.
Using `expect` within the iterator will crash the process on transient RPC/state errors. Refactor to return a `Result` with proper error context.
Apply this diff to make the function fallible without panicking:
```diff
-fn next_sequence_number_in_batch(
-    &self,
-    addresses: Vec<AccountAddress>,
-) -> Result<Vec<(AccountAddress, Option<u64>)>> {
-    let seq_numbers = self
-        .client
-        .next_sequence_number_in_batch(addresses)?
-        .ok_or_else(|| format_err!("next_sequence_number_in_batch error"))?;
-    Ok(seq_numbers
-        .into_iter()
-        .map(|(address, seq_number)| match seq_number {
-            Some(seq_number) => (address, Some(seq_number)),
-            None => {
-                let state_reader = self
-                    .client
-                    .state_reader(StateRootOption::Latest)
-                    .expect("state_reader error");
-                let account_resource = state_reader
-                    .get_account_resource(address)
-                    .expect("get_account_resource error");
-                let seq = account_resource.map(|resource| resource.sequence_number());
-                (address, seq)
-            }
-        })
-        .collect())
-}
+fn next_sequence_number_in_batch(
+    &self,
+    addresses: Vec<AccountAddress>,
+) -> Result<Vec<(AccountAddress, Option<u64>)>> {
+    let seq_numbers = self
+        .client
+        .next_sequence_number_in_batch(addresses.clone())?
+        .ok_or_else(|| format_err!("next_sequence_number_in_batch error"))?;
+    let state_reader = self.client.state_reader(StateRootOption::Latest)?;
+    let mut out = Vec::with_capacity(seq_numbers.len());
+    for (address, seq_number) in seq_numbers {
+        if let Some(seq) = seq_number {
+            out.push((address, Some(seq)));
+        } else {
+            let seq = state_reader
+                .get_account_resource(address)?
+                .map(|resource| resource.sequence_number());
+            out.push((address, seq));
+        }
+    }
+    Ok(out)
+}
```
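The shape of the refactor above — a panicking `expect` inside `map` replaced by an error-propagating loop — can be illustrated in isolation. This is a simplified sketch, not the project's code: `u32` stands in for the address type and a closure stands in for the state reader:

```rust
// Fill in missing sequence numbers via a fallible lookup, propagating
// errors with `?` instead of panicking inside an iterator closure.
fn fill_missing<F>(
    pairs: Vec<(u32, Option<u64>)>,
    lookup: F,
) -> Result<Vec<(u32, Option<u64>)>, String>
where
    F: Fn(u32) -> Result<Option<u64>, String>,
{
    let mut out = Vec::with_capacity(pairs.len());
    for (addr, seq) in pairs {
        match seq {
            Some(s) => out.push((addr, Some(s))),
            // A transient lookup failure becomes an Err returned to the
            // caller, rather than a process-crashing expect().
            None => out.push((addr, lookup(addr)?)),
        }
    }
    Ok(out)
}
```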
```rust
impl EventHandler<Self, BlockTemplateResponse> for MinerService {
    fn handle_event(
        &mut self,
        block_template: BlockTemplateResponse,
        ctx: &mut ServiceContext<Self>,
    ) {
        let config = self.config.clone();
        let addr = ctx
            .service_ref::<Self>()
            .expect("failed to get miner service ref")
            .clone();
        let parent = block_template.parent;
        let block_template = block_template.template;
        let block_time_gap = block_template.timestamp - parent.timestamp();
        ctx.spawn(async move {
            let should_skip =
                config.miner.is_disable_mint_empty_block() && block_time_gap < 3600 * 1000;
            if should_skip {
                info!("Skipping minting empty block");
            } else if let Err(e) = addr
                .send(DispatchMintBlockTemplate { block_template })
                .await
            {
                warn!("Failed to dispatch block template: {}", e);
            }
        });
    }
}
```
Fix skipping logic and time-gap subtraction in BlockTemplateResponse handler.
- The empty-block check was dropped. As written, the miner will skip minting even when the template contains transactions if `disable_mint_empty_block` is set and the time gap is small. That's a functional regression.
- `block_template.timestamp - parent.timestamp()` can underflow; use `saturating_sub`.

Apply this diff to restore intended behavior and avoid underflow:
```diff
 fn handle_event(
     &mut self,
     block_template: BlockTemplateResponse,
     ctx: &mut ServiceContext<Self>,
 ) {
     let config = self.config.clone();
     let addr = ctx
         .service_ref::<Self>()
         .expect("failed to get miner service ref")
         .clone();
     let parent = block_template.parent;
     let block_template = block_template.template;
-    let block_time_gap = block_template.timestamp - parent.timestamp();
+    let block_time_gap = block_template.timestamp.saturating_sub(parent.timestamp());
     ctx.spawn(async move {
-        let should_skip =
-            config.miner.is_disable_mint_empty_block() && block_time_gap < 3600 * 1000;
+        let should_skip = config.miner.is_disable_mint_empty_block()
+            && block_template.body.transactions.is_empty()
+            && block_time_gap < 3600 * 1000;
         if should_skip {
             info!("Skipping minting empty block");
         } else if let Err(e) = addr
             .send(DispatchMintBlockTemplate { block_template })
             .await
         {
             warn!("Failed to dispatch block template: {}", e);
         }
     });
 }
```
🤖 Prompt for AI Agents
In miner/src/lib.rs around lines 352 to 380, the handler dropped the empty-block
check and used a subtraction that can underflow; restore the original logic so
we only skip minting when the template contains no transactions AND
config.miner.is_disable_mint_empty_block() is true, and compute the time gap
using saturating_sub (e.g.,
block_template.timestamp.saturating_sub(parent.timestamp())) before comparing to
the threshold; keep the spawn/dispatch flow and ensure the condition checks both
emptiness and the time gap to avoid the regression and underflow.
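The underflow concern above is easy to demonstrate on its own: plain `u64` subtraction panics in debug builds (and wraps in release) when the parent timestamp is ahead, while `saturating_sub` clamps the gap to zero. A minimal sketch, with the function name chosen for illustration:

```rust
// Clamp the gap to zero when the parent's timestamp is ahead of the
// template's (e.g. due to clock skew), instead of underflowing.
fn block_time_gap(template_ts: u64, parent_ts: u64) -> u64 {
    template_ts.saturating_sub(parent_ts)
}
```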
miner/tests/miner_test.rs
```diff
+// use starcoin_account_service::AccountService;
+// use starcoin_config::NodeConfig;
+// use starcoin_consensus::Consensus;
+// use starcoin_dag::service::pruning_point_service::PruningPointService;
+// use starcoin_genesis::Genesis;
+// use starcoin_miner::{
+//     BlockBuilderService, BlockHeaderExtra, BlockTemplateRequest, MinerService, NewHeaderChannel,
+//     NewHeaderService, SubmitSealRequest,
+// };
+// use starcoin_service_registry::{RegistryAsyncService, RegistryService};
+// use starcoin_storage::BlockStore;
+// use starcoin_sync::block_connector::BlockConnectorService;
+// use starcoin_txpool::TxPoolService;
+// use starcoin_types::{system_events::GenerateBlockEvent, U256};
+// use std::sync::Arc;
+// use std::time::Duration;
+// use tokio::time::sleep;
-#[stest::test]
-async fn test_miner_service() {
-    let mut config = NodeConfig::random_for_dag_test();
-    config.miner.disable_mint_empty_block = Some(false);
-    let registry = RegistryService::launch();
-    let node_config = Arc::new(config.clone());
-    registry.put_shared(node_config.clone()).await.unwrap();
-    let (storage, _chain_info, genesis, dag) =
-        Genesis::init_storage_for_test(config.net()).unwrap();
-    registry.put_shared(storage.clone()).await.unwrap();
-    registry.put_shared(dag).await.unwrap();
+// #[stest::test]
+// async fn test_miner_service() {
+//     let mut config = NodeConfig::random_for_dag_test();
+//     config.miner.disable_mint_empty_block = Some(false);
+//     let registry = RegistryService::launch();
+//     let node_config = Arc::new(config.clone());
+//     registry.put_shared(node_config.clone()).await.unwrap();
+//     let (storage, _chain_info, genesis, dag) =
+//         Genesis::init_storage_for_test(config.net()).unwrap();
+//     registry.put_shared(storage.clone()).await.unwrap();
+//     registry.put_shared(dag).await.unwrap();
-    let genesis_hash = genesis.block().id();
-    registry.put_shared(genesis).await.unwrap();
-    let chain_header = storage
-        .get_block_header_by_hash(genesis_hash)
-        .unwrap()
-        .unwrap();
+//     let genesis_hash = genesis.block().id();
+//     registry.put_shared(genesis).await.unwrap();
+//     let chain_header = storage
+//         .get_block_header_by_hash(genesis_hash)
+//         .unwrap()
+//         .unwrap();
-    let txpool = TxPoolService::new(node_config.clone(), storage.clone(), chain_header, None);
-    registry.put_shared(txpool).await.unwrap();
-    registry
-        .register_mocker(AccountService::mock().unwrap())
-        .await
-        .unwrap();
+//     let txpool = TxPoolService::new(node_config.clone(), storage.clone(), chain_header, None);
+//     registry.put_shared(txpool).await.unwrap();
+//     registry
+//         .register_mocker(AccountService::mock().unwrap())
+//         .await
+//         .unwrap();
-    registry.register::<PruningPointService>().await.unwrap();
+//     registry.register::<PruningPointService>().await.unwrap();
-    registry
-        .register::<BlockConnectorService<TxPoolService>>()
-        .await
-        .unwrap();
+//     registry
+//         .register::<BlockConnectorService<TxPoolService>>()
+//         .await
+//         .unwrap();
-    registry.put_shared(NewHeaderChannel::new()).await.unwrap();
-    registry.register::<NewHeaderService>().await.unwrap();
-    let template = registry.register::<BlockBuilderService>().await.unwrap();
-    let response = template
-        .send(BlockTemplateRequest)
-        .await
-        .unwrap()
-        .unwrap()
-        .template;
-    assert_eq!(response.number, 1);
+//     registry.put_shared(NewHeaderChannel::new()).await.unwrap();
+//     registry.register::<NewHeaderService>().await.unwrap();
+//     let template = registry.register::<BlockBuilderService>().await.unwrap();
+//     let response = template
+//         .send(BlockTemplateRequest)
+//         .await
+//         .unwrap()
+//         .unwrap()
+//         .template;
+//     assert_eq!(response.number, 1);
-    let miner = registry.register::<MinerService>().await;
-    assert!(miner.is_ok());
+//     let miner = registry.register::<MinerService>().await;
+//     assert!(miner.is_ok());
-    let miner = miner.unwrap();
-    miner.notify(GenerateBlockEvent::new_break(false)).unwrap();
+//     let miner = miner.unwrap();
+//     miner.notify(GenerateBlockEvent::new_break(false)).unwrap();
-    sleep(Duration::from_millis(200)).await;
-    miner.notify(GenerateBlockEvent::new_break(true)).unwrap();
-    sleep(Duration::from_millis(200)).await;
-    // Generate a event
-    let diff = U256::from(1024);
-    let minting_blob = vec![0u8; 76];
+//     sleep(Duration::from_millis(200)).await;
+//     miner.notify(GenerateBlockEvent::new_break(true)).unwrap();
+//     sleep(Duration::from_millis(200)).await;
+//     // Generate a event
+//     let diff = U256::from(1024);
+//     let minting_blob = vec![0u8; 76];
-    let nonce = config
-        .net()
-        .genesis_config()
-        .consensus()
-        .solve_consensus_nonce(&minting_blob, diff, config.net().time_service().as_ref());
-    miner
-        .try_send(SubmitSealRequest::new(
-            minting_blob,
-            nonce,
-            BlockHeaderExtra::new([0u8; 4]),
-        ))
-        .unwrap();
+//     let nonce = config
+//         .net()
+//         .genesis_config()
+//         .consensus()
+//         .solve_consensus_nonce(&minting_blob, diff, config.net().time_service().as_ref());
+//     miner
+//         .try_send(SubmitSealRequest::new(
+//             minting_blob,
+//             nonce,
+//             BlockHeaderExtra::new([0u8; 4]),
+//         ))
+//         .unwrap();
-    registry.shutdown_system().await.unwrap();
-}
+//     registry.shutdown_system().await.unwrap();
+// }
```
🛠️ Refactor suggestion
All miner service tests are commented out — restoration is needed to prevent coverage/regression gaps
The entire test body is disabled via comments. This removes coverage for the miner pipeline and block template creation, risking regressions going unnoticed (especially given the recent async/event-driven refactor).
Action options:
- Port the test to the new channel-driven template flow and MinerService notifications.
- If temporarily unstable, mark with `#[ignore]` and track with an issue rather than commenting out.
- If obsolete, delete the file to avoid dead code drift.
Proposed minimal restoration targeting the new flow (adjust types/names as per the refactor):
```diff
-// #[stest::test]
-// async fn test_miner_service() {
-// ...
-// let template = registry.register::<BlockBuilderService>().await.unwrap();
-// let response = template
-//     .send(BlockTemplateRequest)
-//     .await
-//     .unwrap()
-//     .unwrap()
-//     .template;
-// assert_eq!(response.number, 1);
-// ...
-// }
+#[stest::test]
+async fn test_miner_service_template_smoke() {
+    // Arrange
+    let mut config = NodeConfig::random_for_dag_test();
+    config.miner.disable_mint_empty_block = Some(false);
+    let registry = RegistryService::launch();
+    let node_config = Arc::new(config.clone());
+    registry.put_shared(node_config.clone()).await.unwrap();
+    let (storage, _chain_info, genesis, dag) = Genesis::init_storage_for_test(config.net()).unwrap();
+    registry.put_shared(storage.clone()).await.unwrap();
+    registry.put_shared(dag).await.unwrap();
+    registry.put_shared(genesis).await.unwrap();
+
+    // Register new header/template pipeline
+    registry.put_shared(NewHeaderChannel::new()).await.unwrap();
+    registry.register::<NewHeaderService>().await.unwrap();
+    let builder = registry.register::<BlockBuilderService>().await.unwrap();
+
+    // Act
+    let resp = builder.send(BlockTemplateRequest).await.unwrap().unwrap();
+
+    // Assert
+    assert_eq!(resp.template.number, 1);
+    registry.shutdown_system().await.unwrap();
+}
```

If you'd like, I can open a tracking issue and help adapt this to the exact new API surface.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In miner/tests/miner_test.rs around lines 4-93 the entire async miner
integration test is commented out which removes coverage; restore the test by
re-enabling it and adapting to the current API surface (uncomment and update
imports, types and service names to match the refactor), or if it is currently
unstable, re-enable as a test with #[stest::test] and #[ignore] and create a
short TODO/issue reference in the file; specifically: uncomment the test body,
update NodeConfig/Genesis/Registry/Service type names and channel calls to the
new channel-driven block template and MinerService APIs, ensure async runtime
attributes remain, and either run and fix compile errors or mark #[ignore] and
add an issue number before committing.
```rust
pub fn account_unlock_in_batch(
    &self,
    batch: Vec<(AccountAddress, String)>,
    duration: std::time::Duration,
) -> anyhow::Result<Vec<AccountInfo>> {
    self.call_rpc_blocking(|inner| {
        inner
            .account_client
            .unlock_in_batch(batch, Some(duration.as_secs() as u32))
    })
    .map_err(map_err)
}
```
🛠️ Refactor suggestion
Guard against u64→u32 truncation for unlock duration.
Casting with `as u32` may silently truncate for large durations. Clamp to `u32::MAX` or error out.
Apply this diff to clamp safely:
```diff
-    self.call_rpc_blocking(|inner| {
-        inner
-            .account_client
-            .unlock_in_batch(batch, Some(duration.as_secs() as u32))
-    })
+    self.call_rpc_blocking(|inner| {
+        let secs = duration.as_secs().min(u32::MAX as u64) as u32;
+        inner
+            .account_client
+            .unlock_in_batch(batch, Some(secs))
+    })
```
🤖 Prompt for AI Agents
In rpc/client/src/lib.rs around lines 450 to 461, the direct cast
duration.as_secs() as u32 can silently truncate large durations; instead compute
let secs = duration.as_secs(); clamp it to u32::MAX if secs > u32::MAX (e.g. let
secs_u32 = if secs > u64::from(u32::MAX) { u32::MAX } else { secs as u32 });
then pass Some(secs_u32) to unlock_in_batch so no silent truncation occurs
(alternatively return an error when secs exceeds u32::MAX if you prefer failing
fast).
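The clamping guard suggested above can be checked in isolation. As an equivalent, slightly more explicit spelling, `u32::try_from` with `unwrap_or(u32::MAX)` saturates instead of truncating; the function name here is illustrative:

```rust
use std::time::Duration;

// Convert a Duration's whole seconds to u32 without silent truncation:
// anything larger than u32::MAX saturates to u32::MAX.
fn unlock_secs(d: Duration) -> u32 {
    u32::try_from(d.as_secs()).unwrap_or(u32::MAX)
}
```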
```rust
fn submit_transactions(
    &self,
    txns: Vec<SignedUserTransaction>,
) -> FutureResult<Vec<HashValue>> {
    let txn_hashes = txns.iter().map(|txn| txn.id()).collect();
    let result: Result<(), jsonrpc_core::Error> = self
        .service
        .add_txns(txns)
        .pop()
        .expect("txpool should return result")
        .map_err(convert_to_rpc_error);

    Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
}
```
Batch submit ignores per-transaction errors; only the last result is checked.
Using `pop()` validates just the last txn. Earlier failures will be silently acknowledged, and all hashes are returned as "accepted."
Apply this diff to surface the first error and only succeed when all txns are accepted:
```diff
 fn submit_transactions(
     &self,
     txns: Vec<SignedUserTransaction>,
 ) -> FutureResult<Vec<HashValue>> {
-    let txn_hashes = txns.iter().map(|txn| txn.id()).collect();
-    let result: Result<(), jsonrpc_core::Error> = self
-        .service
-        .add_txns(txns)
-        .pop()
-        .expect("txpool should return result")
-        .map_err(convert_to_rpc_error);
-
-    Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
+    if txns.is_empty() {
+        return Box::pin(futures::future::ok(Vec::new()));
+    }
+    let txn_hashes: Vec<HashValue> = txns.iter().map(|txn| txn.id()).collect();
+    let results = self.service.add_txns(txns);
+    // If any error exists, return the first mapped error.
+    let first_err = results.into_iter().find_map(|r| r.err());
+    let result: Result<(), jsonrpc_core::Error> = match first_err {
+        Some(err) => Err(convert_to_rpc_error(err)),
+        None => Ok(()),
+    };
+    Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
 }
```

Follow-up: add a unit test that mixes valid/invalid txns and asserts the RPC returns an error rather than partial success. I can draft the test scaffolding if helpful.
🤖 Prompt for AI Agents
In rpc/server/src/module/txpool_rpc.rs around lines 46 to 60, the current batch
submit uses .pop() and only inspects the last transaction result so earlier
per-txn errors are ignored; change the logic to collect all results from
self.service.add_txns(txns), check them in order and if any entry is an Err
convert and return that first error as a jsonrpc_core::Error, otherwise return
the full list of txn_hashes; ensure you do not consume only the last element
(iterate or find on the collection), propagate the first encountered error via
convert_to_rpc_error, and update the function to only succeed when all txns are
accepted; also add a unit test that submits a mix of valid and invalid
transactions and asserts the RPC returns an error (no partial success).
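The all-or-first-error aggregation described above can be demonstrated standalone. This sketch uses `String` as a stand-in error type, but the `find_map(|r| r.err())` shape is the same as in the suggested diff:

```rust
// Succeed only when every per-item result is Ok; otherwise surface the
// first error in submission order.
fn first_err(results: Vec<Result<(), String>>) -> Result<(), String> {
    match results.into_iter().find_map(|r| r.err()) {
        Some(err) => Err(err),
        None => Ok(()),
    }
}
```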
```rust
fn next_sequence_number_with_state(
    &self,
    address: AccountAddress,
    header: &BlockHeader,
    state_root: HashValue,
) -> Option<u64>;
```
💡 Verification agent
🧩 Analysis chain
Ensure cross-file naming/metrics align with “with_state.”
Elsewhere, timers/labels still reference "…with_header" in `txpool/src/tx_pool_service_impl.rs`. Align labels for consistency and searchability.
Run this to locate lingering “with_header” labels/usages:
🏁 Script executed:

```bash
#!/bin/bash
rg -nP -C2 'next_sequence_number_with_header|get_pending_with_header|with_header' --hidden
```

Length of output: 2629
Align “with_state” naming across API and service implementation
The API defines:

```rust
fn next_sequence_number_with_state(
    &self,
    address: AccountAddress,
    state_root: HashValue,
) -> Option<u64>;
```

but in `txpool/src/tx_pool_service_impl.rs` the service and its metrics still reference `with_header`. To maintain consistency and improve discoverability, all "with_header" identifiers should be renamed to "with_state":
• In `txpool/src/tx_pool_service_impl.rs` around line 203:
  - Change the metrics label from `.with_label_values(&["next_sequence_number_with_header"])` to `.with_label_values(&["next_sequence_number_with_state"])`.
  - Update the call site from `.next_sequence_number_with_header(address, state_root)` to `.next_sequence_number_with_state(address, state_root)`.
• Immediately below (around line 393), rename the function signature from `pub(crate) fn next_sequence_number_with_header(&self, address: AccountAddress, state_root: HashValue) -> Option<u64>` to `pub(crate) fn next_sequence_number_with_state(&self, address: AccountAddress, state_root: HashValue) -> Option<u64>`, keeping the parameters and body unchanged.
This ensures that method names and metric labels remain aligned with the API and simplifies future search and maintenance.
🤖 Prompt for AI Agents
In txpool/src/tx_pool_service_impl.rs around line 203 and again around line 393,
the service and metrics still use "with_header" while the API uses "with_state";
update the metric label call at ~line 203 from
.with_label_values(&["next_sequence_number_with_header"]) to
.with_label_values(&["next_sequence_number_with_state"]), change the call site
.next_sequence_number_with_header(address, state_root) to
.next_sequence_number_with_state(address, state_root), and rename the function
at ~line 393 from pub(crate) fn next_sequence_number_with_header(...) to
pub(crate) fn next_sequence_number_with_state(...) keeping the parameters and
body unchanged so names align with the API.
Benchmark for 951a57e
Benchmark for 64b1537
Benchmark for f90a031
Pull request type
Please check the type of change your PR introduces:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Other information
Summary by CodeRabbit
New Features
Refactor
Configuration
Chores
Tests