Conversation

Contributor

@jackzhhuang jackzhhuang commented Aug 25, 2025

Pull request type

Please check the type of change your PR introduces:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation content changes
  • Other (please describe):

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Other information

Summary by CodeRabbit

  • New Features

    • Added batch operations: sign multiple transactions, unlock multiple accounts, submit multiple transactions, and fetch next sequence numbers for multiple addresses via RPC/CLI.
  • Refactor

    • Mining and block template creation moved to an asynchronous, event-driven flow for improved responsiveness.
    • TxPool and related services now use state-root–based queries.
    • Synchronization and execution services refactored to async pipelines with improved state consistency checks.
  • Configuration

    • Increased base block gas limit; reduced max transactions per block.
  • Chores

    • Added async-std dependency to miner.
  • Tests

    • Disabled miner integration test.


coderabbitai bot commented Aug 25, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds batch account operations (sign/unlock) across API, services, RPC, and client; refactors txpool interfaces from header-based to state-root-based with batch sequence APIs; overhauls miner template creation to an async, event-driven pipeline; adjusts chain execution/verification and OpenedBlock state sharing; updates sync execution to async; modifies constants and utilities.

Changes

Cohort / File(s) Summary
Account batch operations
account/api/src/message.rs, account/api/src/service.rs, account/service/src/service.rs, account/src/account_manager.rs, rpc/api/src/account/mod.rs, rpc/server/src/module/account_rpc.rs, rpc/client/src/lib.rs
Introduces batch sign/unlock request/response variants, trait methods, service handling, manager implementations, and RPC/client endpoints mirroring single-item flows.
TxPool state-root refactor + batch APIs
txpool/api/src/lib.rs, txpool/mock-service/src/lib.rs, txpool/src/pool/queue.rs, txpool/src/pool_client.rs, txpool/src/tx_pool_service_impl.rs, rpc/api/src/txpool/mod.rs, rpc/server/src/module/txpool_rpc.rs, rpc/client/src/lib.rs
Replaces BlockHeader inputs with state_root across APIs and implementations; adds batch next-sequence API; exposes RPC endpoints for batch submit and batch sequencing; updates client forwarding; adds queue batch computation.
Miner template pipeline overhaul
miner/src/create_block_template/block_builder_service.rs, miner/src/create_block_template/mod.rs, miner/src/create_block_template/process_transaction.rs, miner/src/lib.rs, miner/Cargo.toml, miner/tests/miner_test.rs
Converts template building to async, channel-driven flow; introduces transaction processing module and data structs; switches to EventHandler model; updates TemplateTxProvider to state-root; adds async-std dep; comments out miner test.
Chain execution and verification updates
chain/src/chain.rs, chain/api/src/errors.rs, chain/src/verifier/mod.rs, chain/open-block/src/lib.rs
Adds state-root consistency guard and logging tweaks in execution; introduces VerifyBlockField::Parents/PruningPoint and uses them in verifier; shares OpenedBlock state via Arc and returns Arc reader.
Sync execute service async pipeline
sync/src/block_connector/execute_service.rs
Refactors execute path to async with channel-based dispatch; tracks origin; waits for parent readiness; broadcasts results in periodic loop.
Tx factory batching and time source
cmd/tx-factory/src/main.rs
Switches expiration to system time; introduces batch sequence retrieval and batch submit/sign/unlock; removes initial-balance wait path; refactors stress test to batch flow.
Config constants
config/src/genesis_config.rs
Updates base block gas limit (50,000,000 → 500,040,000) and max txs per block (700 → 400).
Block relayer broadcast condition
block-relayer/src/block_relayer.rs
Removes early return on not-nearly-synced; always constructs/broadcasts compact block.
VM logging
vm/vm-runtime/src/starcoin_vm.rs
Adds logs around concurrency level in execute_block.
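The walkthrough above describes several channel-driven async refactors (miner template building, sync execution). A minimal stand-in sketch of the pattern, using std threads and channels rather than the async-std tasks the PR actually uses (`run_pipeline` and the `(id, state_root)` pair are illustrative, not from the codebase):

```rust
use std::sync::mpsc;
use std::thread;

// Work is executed off the caller's path, results are sent through a
// channel, and a loop drains the channel to broadcast outcomes in order.
fn run_pipeline(blocks: Vec<u64>) -> Vec<(u64, u64)> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for id in blocks {
            // stand-in for "execute block and compute its state root"
            tx.send((id, id * 10)).unwrap();
        }
        // sender drops here, which terminates the receiving loop below
    });
    // stand-in for the periodic loop that broadcasts results
    rx.into_iter().collect()
}

fn main() {
    assert_eq!(run_pipeline(vec![1, 2, 3]), vec![(1, 10), (2, 20), (3, 30)]);
    println!("ok");
}
```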

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Miner as MinerService
  participant CBS as CreateBlockTemplate Service
  participant Proc as ProcessTransaction (async)
  participant Ch as Channel
  participant Bus as Event Bus

  Miner->>CBS: notify(BlockTemplateRequest)
  CBS->>Proc: spawn process(header, state_root, txs)
  Proc-->>Ch: send(ProcessHeaderTemplate)
  CBS->>CBS: periodic loop consumes channel
  CBS-->>Bus: broadcast(BlockTemplateResponse)
  Bus-->>Miner: BlockTemplateResponse event
  Miner->>Miner: compute gap / skip check
  Miner-->>Bus: DispatchMintBlockTemplate (if not skipped)
sequenceDiagram
  autonumber
  participant Client as RpcClient
  participant RPC as AccountRpc (Server)
  participant Svc as AccountAsyncService
  participant Acc as AccountService
  participant Mgr as AccountManager

  Client->>RPC: account.sign_txn_in_batch(raw_txns)
  RPC->>Svc: sign_txn_in_batch(raw_txns)
  Svc->>Acc: AccountRequest::SignTxnInBatch
  Acc->>Mgr: sign_txn_in_batch(txns)
  Mgr-->>Acc: Vec<SignedUserTransaction>
  Acc-->>Svc: AccountResponse::SignedTxnList
  Svc-->>RPC: Vec<SignedUserTransaction>
  RPC-->>Client: Vec<SignedUserTransaction>

  Client->>RPC: account.unlock_in_batch(batch, duration)
  RPC->>Svc: unlock_account_in_batch(batch, dur)
  Svc->>Acc: AccountRequest::UnlockAccountInBatch
  Acc->>Mgr: unlock_account_in_batch(batch, dur)
  Mgr-->>Acc: Vec<AccountInfo>
  Acc-->>Svc: AccountResponse::AccountList
  Svc-->>RPC: Vec<AccountInfo>
  RPC-->>Client: Vec<AccountInfo>

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120–180 minutes

Possibly related PRs

Suggested reviewers

  • sanlee42
  • jolestar
  • nkysg

Poem

Down burrows of code I hop with delight,
Bundles of txns signed in a single byte-night.
State roots I nibble, headers take flight,
Miners drum async through channels so bright.
With paws on the queue and ears tuned tight—
Batch, build, and broadcast: it’s hare-right! 🐇✨



@jackzhhuang jackzhhuang marked this pull request as draft August 25, 2025 02:19

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cmd/tx-factory/src/main.rs (1)

517-560: Fix potential infinite loop on account creation failure.

continue; bypasses i += 1, risking a tight infinite loop if account_create keeps failing.

Apply this diff to ensure forward progress:

             let account = match self.client.account_create(self.account_password.clone()) {
                 Ok(account) => account,
                 Err(e) => {
                     error!("create account error: {}", e);
-                    continue;
+                    i += 1;
+                    continue;
                 }
             };
@@
-            i += 1;
+            i += 1;
🧹 Nitpick comments (52)
vm/vm-runtime/src/starcoin_vm.rs (2)

1667-1676: Reduce noise and add useful context to concurrency logs; also time the parallel path.

Logging the same message before and after execution at different levels is redundant and will spam logs per block at info level. Suggest: downgrade the pre-execution log to debug, include txn_count, and add an elapsed time in the post-execution log. This gives actionable signals without flooding prod logs. Also consider that set_concurrency_level_once already logs at info when the level is set.

Apply this diff within this block:

@@
-        if concurrency_level > 1 {
-            info!("TurboSTM executor concurrency_level {}", concurrency_level);
+        if concurrency_level > 1 {
+            let txn_count = transactions.len();
+            let start = Instant::now();
+            debug!(
+                "TurboSTM parallel execute start: concurrency_level={}, txns={}",
+                concurrency_level, txn_count
+            );
             let (result, _) = crate::parallel_executor::ParallelStarcoinVM::execute_block(
                 transactions,
                 state_view,
                 concurrency_level,
                 block_gas_limit,
                 metrics,
             )?;
-            debug!("TurboSTM executor concurrency_level {}", concurrency_level);
+            debug!(
+                "TurboSTM parallel execute end: concurrency_level={}, txns={}, elapsed_ms={}",
+                concurrency_level,
+                txn_count,
+                start.elapsed().as_millis()
+            );
             Ok(result)

And add the import at the top of the file (outside this hunk):

use std::time::Instant;

1675-1675: Differentiate the post-execution log message or remove it.

As written, this debug message repeats the same static text as the pre-execution log, providing little value. If you don’t adopt the timing suggestion above, at least change it to indicate completion, e.g., “parallel execute finished,” and include txn count.

-            debug!("TurboSTM executor concurrency_level {}", concurrency_level);
+            debug!("TurboSTM parallel execute finished; concurrency_level={}, txns={}", concurrency_level, /*transactions.len()*/ 0)

Note: you’ll need to capture transactions.len() before it’s moved into execute_block (or keep a separate count) to populate the placeholder.

config/src/genesis_config.rs (3)

757-758: Consensus parameters bumped: verify invariants and throughput targets before merging

  • G_BASE_BLOCK_GAS_LIMIT increased 10x to 500,040,000; several networks multiply this by 10 again (e.g., Test/Dev/Halley/DagTest), yielding 5,000,400,000 gas per block. Combined with a 1s base_block_time_target, this is a material change that can impact proposer latency, execution time, and fork rate.
  • G_MAX_TRANSACTION_PER_BLOCK reduced from 700 to 400. Ensure block construction logic uses the minimum of {gas budget, tx-count cap} and that txpool batching, RPC, and miner template generation are consistent with this new cap.

Please verify:

  • Block building still meets the 1s target under typical and worst-case gas usage with multi-thread execution enabled.
  • txpool, executor, and miner template code paths have no hard-coded assumptions (e.g., 700) and correctly read the new constant.
  • The invariant “block_gas_limit > maximum_number_of_gas_units (per-tx)” holds for all gas constant versions used by the networks, with a non-trivial safety margin.

Add a lightweight unit test to enforce the invariant and avoid future regressions. You can append this to the existing tests module in this file:

#[test]
fn test_block_gas_limit_exceeds_tx_gas_cap_for_all_schedules() {
    // Ensure base block gas limit is strictly greater than per-tx max across all schedules.
    // Safety margin is expected; adjust as needed if schedules change.
    let v1_max = G_GAS_CONSTANTS_V1.maximum_number_of_gas_units;
    let v2_max = G_GAS_CONSTANTS_V2.maximum_number_of_gas_units;
    let v3_max = G_GAS_CONSTANTS_V3.maximum_number_of_gas_units;
    let test_max = G_TEST_GAS_CONSTANTS.maximum_number_of_gas_units;

    assert!(G_BASE_BLOCK_GAS_LIMIT > v1_max, "block gas limit must exceed V1 tx max");
    assert!(G_BASE_BLOCK_GAS_LIMIT > v2_max, "block gas limit must exceed V2 tx max");
    assert!(G_BASE_BLOCK_GAS_LIMIT > v3_max, "block gas limit must exceed V3 tx max");
    assert!(G_BASE_BLOCK_GAS_LIMIT > test_max, "block gas limit must exceed TEST tx max");

    // Also verify the networks that apply *10 multipliers.
    let multiplied = G_BASE_BLOCK_GAS_LIMIT.saturating_mul(10);
    assert!(multiplied > v1_max && multiplied > v2_max && multiplied > v3_max && multiplied > test_max,
        "multiplied block gas limit must exceed all per-tx maxima");
}

Optionally, consider deriving the tx-count cap from target block time and an empirical average gas-per-tx to avoid manual drift, e.g., cap ≈ min(hard-cap, block_gas_limit / avg_tx_gas). This keeps throughput aligned as schedules evolve.
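The suggested derivation cap ≈ min(hard-cap, block_gas_limit / avg_tx_gas) can be sketched as below; the avg_tx_gas figure in the demo is purely illustrative, not a measured value:

```rust
// Derive the per-block txn cap from the gas budget and an empirical
// average gas-per-txn, bounded above by a hard cap.
fn derive_tx_cap(block_gas_limit: u64, avg_tx_gas: u64, hard_cap: u64) -> u64 {
    hard_cap.min(block_gas_limit / avg_tx_gas.max(1))
}

fn main() {
    // With the new 500_040_000 budget and a hypothetical 1_500_000 average,
    // the gas budget (333 txns) binds before the 400-txn hard cap.
    assert_eq!(derive_tx_cap(500_040_000, 1_500_000, 400), 333);
    // A cheap-txn workload is bounded by the hard cap instead.
    assert_eq!(derive_tx_cap(500_040_000, 100_000, 400), 400);
    println!("ok");
}
```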


757-757: Fix the comment grammar and document the rationale for the exact number

The magic “+40,000” looks intentional (to clear a 500,000,000 tx cap?). Please encode the intent in the comment.

-pub static G_BASE_BLOCK_GAS_LIMIT: u64 = 500_040_000; //must big than maximum_number_of_gas_units
+pub static G_BASE_BLOCK_GAS_LIMIT: u64 = 500_040_000; // must be greater than maximum_number_of_gas_units (kept slightly above the per-tx max to guarantee feasibility)

796-799: Cross-network sanity: multiplied gas limits will be 10x higher on test/dev/halley/dagtest—confirm this is intentional

Several network configs set base_block_gas_limit to G_BASE_BLOCK_GAS_LIMIT * 10. With the new base, that yields 5,000,400,000 gas per block on these nets. If the motivation was only to clear the per-tx max, consider whether the extra 10x multiplier is still needed for stress testing, or reduce it to keep test cycles and CI faster.

Also applies to: 850-853, 907-910, 968-971, 1031-1033, 1095-1098, 1170-1173, 1229-1232

block-relayer/src/block_relayer.rs (4)

82-85: Re-enable sync gating or add rate-limiting to avoid broadcast storms during initial sync

Removing this guard will broadcast every historical block during catch-up (NewHeadBlock/NewDagBlock/NewBranch), increasing redundant network traffic and peer load. If this was intentional, consider rate-limiting or conditional broadcasting while syncing. Otherwise, restore the check.

Apply this minimal revert:

-        // if !self.is_nearly_synced() {
-        //     debug!("[block-relay] Ignore NewHeadBlock event because the node has not been synchronized yet.");
-        //     return;
-        // }
+        if !self.is_nearly_synced() {
+            debug!("[block-relay] Ignore NewHeadBlock event because the node has not been synchronized yet.");
+            return;
+        }

If you do need broadcasting during sync, alternatives:

  • Broadcast only every Nth block while syncing.
  • Broadcast only when block timestamp is within a freshness window of now.
  • Broadcast to a sampled subset of peers instead of all peers.
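One way to realize the "every Nth block while syncing" option is a small counter gate that always passes once the node is nearly synced. `BroadcastGate` and its fields are illustrative, not taken from the codebase:

```rust
// Gate that suppresses broadcasts during sync, letting through only
// every n-th block; once nearly synced, every block is broadcast.
struct BroadcastGate {
    n: u64,
    seen: u64,
}

impl BroadcastGate {
    fn should_broadcast(&mut self, nearly_synced: bool) -> bool {
        if nearly_synced {
            return true;
        }
        self.seen += 1;
        self.seen % self.n == 0
    }
}

fn main() {
    let mut gate = BroadcastGate { n: 3, seen: 0 };
    let while_syncing: Vec<bool> = (0..6).map(|_| gate.should_broadcast(false)).collect();
    assert_eq!(while_syncing, vec![false, false, true, false, false, true]);
    assert!(gate.should_broadcast(true)); // synced: always broadcast
    println!("ok");
}
```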

165-170: Swap placeholders in ensure! message (expected vs. collected lengths are flipped)

The error message prints collected first where it says “expect”, which is misleading during diagnosis.

-            ensure!(
-                collect_txns.len() == expect_txn_len,
-                "Fill compact block error, expect txn len: {}, but collect txn len: {}",
-                collect_txns.len(),
-                expect_txn_len
-            );
+            ensure!(
+                collect_txns.len() == expect_txn_len,
+                "Fill compact block error, expect txn len: {}, but collect txn len: {}",
+                expect_txn_len,
+                collect_txns.len()
+            );

111-135: Process prefilled txns before consulting the txpool (aligns with TODO and avoids redundant work)

Filling prefilled entries first reduces lookups and avoids overwriting txpool results. This also tends to reduce network fetches.

-            let mut missing_txn_short_ids = HashSet::new();
-            // Fill the block txns by tx pool
-            for (index, short_id) in compact_block.short_ids.iter().enumerate() {
-                if let Some(txn) = txpool.find_txn(&short_id.0) {
-                    filled_from_txpool += 1;
-                    txns[index] = Some(txn);
-                } else {
-                    missing_txn_short_ids.insert(short_id);
-                }
-            }
-
-            //TODO move prefilled before txpool
-
-            // Fill the block txns by prefilled txn
-            for prefilled_txn in compact_block.prefilled_txn {
-                if prefilled_txn.index as usize >= txns.len() {
-                    continue;
-                }
-                let id = prefilled_txn.tx.id();
-                txns[prefilled_txn.index as usize] = Some(prefilled_txn.tx);
-                filled_from_prefilled += 1;
-                missing_txn_short_ids.remove(&ShortId(id));
-            }
+            let mut missing_txn_short_ids = HashSet::new();
+            // Fill prefilled first
+            for prefilled_txn in compact_block.prefilled_txn {
+                if (prefilled_txn.index as usize) < txns.len() {
+                    txns[prefilled_txn.index as usize] = Some(prefilled_txn.tx);
+                    filled_from_prefilled += 1;
+                }
+            }
+            // Then consult txpool only for the remaining indices
+            for (index, short_id) in compact_block.short_ids.iter().enumerate() {
+                if txns[index].is_none() {
+                    if let Some(txn) = txpool.find_txn(&short_id.0) {
+                        filled_from_txpool += 1;
+                        txns[index] = Some(txn);
+                    } else {
+                        missing_txn_short_ids.insert(short_id);
+                    }
+                }
+            }

247-247: Typo in log message (“PeerCmpctBlockEvent”)

Minor logging polish for easier grep and consistency.

-                error!("[block-relay] process PeerCmpctBlockEvent error {:?}", e);
+                error!("[block-relay] process PeerCompactBlockEvent error {:?}", e);
miner/Cargo.toml (1)

37-37: Gate async-std behind a feature in miner

We confirmed that async-std is actively used in
miner/src/create_block_template/block_builder_service.rs:530 (async_std::task::spawn) from the grep output. To prevent accidental mixing with the Tokio-based parts of the workspace, please:

• In miner/Cargo.toml, change the dependency to:

-async-std = { workspace = true }
+async-std = { workspace = true, optional = true, default-features = false }

• Add a feature flag in the same file:

[features]
async-std-runtime = ["async-std"]

• Guard all async-std imports and calls (e.g. async_std::task::spawn) behind:

#[cfg(feature = "async-std-runtime")]

This confines the async-std runtime to code paths opt-in via the new feature and avoids subtle cross-runtime issues.

chain/api/src/errors.rs (2)

42-44: Good addition of granular VerifyBlockField variants.

Introducing Parents and PruningPoint improves error specificity in DAG verification. Please add short doc comments to clarify intended use (e.g., “parents existence/uniqueness checks”, “pruning point ancestry checks”) for future maintainers.


54-56: Normalize Display casing to match existing variants.

Existing variants render as lowercase ("body", "header", "uncle", "consensus", "state"). The new ones render with leading capitals, which is inconsistent and may surprise log scrapers.

Apply:

-            Self::Parents => write!(f, "Parents"),
-            Self::PruningPoint => write!(f, "PruningPoint"),
+            Self::Parents => write!(f, "parents"),
+            Self::PruningPoint => write!(f, "pruning_point"),

Optional: derive Copy + Clone for VerifyBlockField since it’s a small enum frequently copied in errors:

#[derive(Debug, Copy, Clone)]
pub enum VerifyBlockField { ... }
account/api/src/message.rs (2)

25-27: Batch signing API: confirm signer resolution semantics.

Single-sign API includes an explicit signer, but SignTxnInBatch relies solely on each RawUserTransaction’s sender. That’s fine, but verify downstream code paths don’t assume/require an explicit signer and that mixed-sender batches are supported.

If you need homogeneous batches per signer, consider:

SignTxnInBatch {
    signer: AccountAddress,
    txns: Vec<RawUserTransaction>,
}

Otherwise, please document that mixed senders are allowed and the signer is taken from txn.sender().


36-36: Batch unlock response shape: ensure callers can access per-account results.

UnlockAccountInBatch returns a vector of (address, password) plus a Duration, but AccountResponse has only UnlockAccountResponse (unit) and AccountList(Vec<AccountInfo>). Confirm the service responds with AccountList for the batch path so clients can know which accounts succeeded/failed.

I can help wire the batch handler to return AccountList and update RPC/client stubs if needed.
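The mixed-sender, all-or-nothing semantics described above can be exercised with a minimal stand-in; `RawTxn`, `SignedTxn`, and `sign_txn_in_batch` here are simplified placeholders, not the real starcoin types:

```rust
#[derive(Debug, Clone, PartialEq)]
struct RawTxn {
    sender: u64,
    payload: &'static str,
}

#[derive(Debug, Clone, PartialEq)]
struct SignedTxn {
    sender: u64,
    payload: &'static str,
}

// Signer is inferred from each txn's sender; a locked sender fails.
fn sign_one(txn: &RawTxn, unlocked: &[u64]) -> Result<SignedTxn, String> {
    if unlocked.contains(&txn.sender) {
        Ok(SignedTxn { sender: txn.sender, payload: txn.payload })
    } else {
        Err(format!("account {} is locked", txn.sender))
    }
}

// Output order matches input order; any failure aborts the whole batch,
// mirroring the all-or-nothing behavior discussed above.
fn sign_txn_in_batch(txns: &[RawTxn], unlocked: &[u64]) -> Result<Vec<SignedTxn>, String> {
    txns.iter().map(|t| sign_one(t, unlocked)).collect()
}

fn main() {
    let txns = vec![
        RawTxn { sender: 1, payload: "a" },
        RawTxn { sender: 2, payload: "b" },
    ];
    let signed = sign_txn_in_batch(&txns, &[1, 2]).unwrap();
    assert_eq!(signed[0].sender, 1); // input order preserved
    assert!(sign_txn_in_batch(&txns, &[1]).is_err()); // locked sender fails batch
    println!("ok");
}
```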

chain/open-block/src/lib.rs (1)

38-38: Sharing ChainStateDB via Arc: check concurrent read/write safety.

Switching state to Arc enables aliasing across threads. This OpenedBlock mutates state (apply_write_set/commit). If other components hold a reader obtained via state_reader() concurrently, you may observe inconsistent reads unless ChainStateDB provides internal synchronization and coherent snapshot semantics for readers.

  • If you need concurrent readers during block building, consider exposing a snapshotting reader (e.g., a forked view or RemoteStateReader) instead of the live mutable DB.
  • Alternatively, wrap the state in Arc<RwLock<_>> and return a read guard for readers. This is heavier, so use only if necessary.
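A minimal sketch of that heavier Arc<RwLock<_>> option, with a plain u64 standing in for ChainStateDB state: the writer mutates under a write guard, so a reader taking a read guard afterwards never observes a half-applied write set.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Apply a mutation under the write guard, then read from another thread
// through a read guard. The u64 is a stand-in for ChainStateDB state.
fn apply_then_read(state: Arc<RwLock<u64>>) -> u64 {
    {
        let mut w = state.write().unwrap();
        *w += 1; // stand-in for apply_write_set/commit
    } // write guard dropped here, before readers proceed
    let reader = Arc::clone(&state);
    thread::spawn(move || *reader.read().unwrap())
        .join()
        .unwrap()
}

fn main() {
    assert_eq!(apply_then_read(Arc::new(RwLock::new(0))), 1);
    println!("ok");
}
```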
chain/src/verifier/mod.rs (1)

304-308: PruningPoint error labeling is accurate and helpful.

Switching to VerifyBlockField::PruningPoint improves triage for pruning-related ancestry failures. Consider also updating earlier checks in verify_dag (Lines 262 and 269) to use Parents for consistency.

account/src/account_manager.rs (2)

102-119: Consider refactoring to reduce code duplication

The unlock_account_in_batch method duplicates the account loading and caching logic from unlock_account. Both methods share the same pattern of loading accounts, caching passwords, and returning account info.

Consider extracting the common logic:

+    fn unlock_account_internal(
+        &self,
+        address: AccountAddress,
+        password: String,
+        duration: Duration,
+    ) -> AccountResult<AccountInfo> {
+        let account = Account::load(address, Some(password.clone()), self.store.clone())?
+            .ok_or(AccountError::AccountNotExist(address))?;
+        let ttl = std::time::Instant::now().add(duration);
+        self.key_cache
+            .write()
+            .cache_pass(address, password, ttl);
+        Ok(account.info())
+    }
+
     pub fn unlock_account(
         &self,
         address: AccountAddress,
         password: &str,
         duration: Duration,
     ) -> AccountResult<AccountInfo> {
-        let account = Account::load(address, Some(password.to_string()), self.store.clone())?
-            .ok_or(AccountError::AccountNotExist(address))?;
-        let ttl = std::time::Instant::now().add(duration);
-        self.key_cache
-            .write()
-            .cache_pass(address, password.to_string(), ttl);
-        Ok(account.info())
+        self.unlock_account_internal(address, password.to_string(), duration)
     }
 
     pub fn unlock_account_in_batch(
         &self,
         batch: Vec<(AccountAddress, String)>,
         duration: Duration,
     ) -> AccountResult<Vec<AccountInfo>> {
         let mut accounts = vec![];
         for (address, password) in batch.into_iter() {
-            let account = Account::load(address, Some(password.to_string()), self.store.clone())?
-                .ok_or(AccountError::AccountNotExist(address))?;
-            let ttl = std::time::Instant::now().add(duration);
-            self.key_cache
-                .write()
-                .cache_pass(address, password.to_string(), ttl);
-            accounts.push(account.info());
+            accounts.push(self.unlock_account_internal(address, password, duration)?);
         }
         Ok(accounts)
     }

108-109: Redundant string conversion

The password is already a String from the vector iterator, so calling .to_string() creates an unnecessary allocation.

-            let account = Account::load(address, Some(password.to_string()), self.store.clone())?
+            let account = Account::load(address, Some(password.clone()), self.store.clone())?

And on line 114:

-            .cache_pass(address, password.to_string(), ttl);
+            .cache_pass(address, password, ttl);
chain/src/chain.rs (1)

476-477: Simplify block ID extraction

The block ID is captured early but then extracted again from verified_block.block.header().id(). This creates redundancy.

Since block_id is already captured at line 476, reuse it consistently throughout:

-        let block_id = verified_block.block.header().id();
-        info!("execute dag block:{:?}", block_id);
+        let block_id = verified_block.block.header().id();
+        info!("execute dag block:{:?}", block_id);
sync/src/block_connector/execute_service.rs (2)

83-94: Consider making timeout configurable

The hardcoded 30-second timeout (3000 * 10ms) for waiting on parent blocks should be configurable to adapt to different network conditions.

Consider extracting this as a configuration parameter:

+const PARENT_READY_TIMEOUT_MS: u64 = 30000; // 30 seconds
+const PARENT_CHECK_INTERVAL_MS: u64 = 10;

 for parent_id in new_block.header().parents_hash() {
-    let mut count: u64 = 3000;
+    let mut count: u64 = PARENT_READY_TIMEOUT_MS / PARENT_CHECK_INTERVAL_MS;
     while !Self::check_parent_ready(parent_id, storage.clone(), dag.clone())? && count > 0 {
-        async_std::task::sleep(std::time::Duration::from_millis(10)).await;
+        async_std::task::sleep(std::time::Duration::from_millis(PARENT_CHECK_INTERVAL_MS)).await;
         count = count.saturating_sub(1);
         if count == 0 {
             return Err(anyhow::anyhow!(
                 "wait dag block timeout, for block id: {:?}",
                 parent_id
             ));
         }
     }
 }

248-288: Consider adding error metrics for peer block failures

The async task for peer block execution properly handles errors by sending failure info through the channel. Consider adding metrics to track peer block execution failures for monitoring.

Would you like me to suggest a metrics implementation to track execution failures per peer?

account/service/src/service.rs (2)

140-142: Batch signing path added — confirm semantics (all-or-nothing vs partial successes) and add basic telemetry

The implementation is straightforward and correct. Two suggestions:

  • Clarify whether a single locked/missing account should fail the entire batch (current behavior) or return per-item results; if the latter, response type would need redesign across API layers.
  • Add lightweight tracing (batch size, elapsed) to observe usage patterns and latency.

Example minimal instrumentation:

 AccountRequest::SignTxnInBatch { txns } => {
-    AccountResponse::SignedTxnList(self.manager.sign_txn_in_batch(txns)?)
+    let _span = trace_span!("account.sign_txn_in_batch", size = txns.len()).entered();
+    AccountResponse::SignedTxnList(self.manager.sign_txn_in_batch(txns)?)
 }

152-155: Batch unlock path added — confirm default duration semantics and guard against large batches

Looks correct. Two items to verify:

  • Duration defaults: upstream RPC maps None to u32::MAX seconds; ensure that’s intentional and documented to avoid unbounded unlocks.
  • Consider enforcing a max batch size and emitting a warning/metric to protect wallet storage and key-cache from spikes.

If you want, I can propose a limit (e.g., 1024) and return an error when exceeded.
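A guard of that shape is a few lines; the 1024 limit below mirrors the example floated above and is not a value from the codebase:

```rust
// Reject oversized batches up-front to protect wallet storage and the
// key cache from spikes. MAX_BATCH is an illustrative limit.
const MAX_BATCH: usize = 1024;

fn check_batch_size(len: usize) -> Result<(), String> {
    if len > MAX_BATCH {
        Err(format!("batch size {} exceeds limit {}", len, MAX_BATCH))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(check_batch_size(1024).is_ok());
    assert!(check_batch_size(1025).is_err());
    println!("ok");
}
```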

rpc/api/src/account/mod.rs (2)

45-50: Add docstring clarifying signer resolution and output ordering for sign_txn_in_batch

Please document that:

  • Each RawUserTransaction’s sender determines the signer; those accounts must be unlocked.
  • The returned SignedUserTransaction list preserves the input order.
  • Failure semantics (entire batch errors vs partial).

Proposed docs:

 #[rpc(name = "account.sign_txn_in_batch")]
 fn sign_txn_in_batch(
     &self,
     raw_txn: Vec<RawUserTransaction>,
 ) -> FutureResult<Vec<SignedUserTransaction>>;
+/// Notes:
+/// - Signer is inferred from each txn.sender; accounts must be unlocked.
+/// - Output order matches input order.
+/// - Current behavior: any failure errors the whole batch.

60-67: unlock_in_batch: document None-duration behavior and return ordering

The RPC maps None to u32::MAX server-side. Please reflect that here and state that results are ordered to the input and that accounts with wrong passwords cause the entire batch to fail (if that’s the intended behavior).

 /// unlock accounts for duration in seconds, default to u32::max.
 #[rpc(name = "account.unlock_in_batch")]
 fn unlock_in_batch(
     &self,
     batch: Vec<(AccountAddress, String)>,
     duration: Option<u32>,
 ) -> FutureResult<Vec<AccountInfo>>;
+/// Notes:
+/// - If duration is None, the server unlocks accounts for u32::MAX seconds.
+/// - Output order matches input order.
+/// - Current behavior: any failure errors the whole batch.
rpc/api/src/txpool/mod.rs (3)

19-21: New submit_transactions RPC — clarify behavior on partial validation failures

Good addition. Please define whether add_txns(txns) is all-or-nothing or per-tx error reporting. Currently the server collects ids and returns them if add_txns succeeds; if any tx fails, does the entire batch fail? If partial success is possible, consider returning a per-tx result vector.

If you decide to keep all-or-nothing, add docs to make this explicit.
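If per-tx reporting is preferred, a minimal sketch of the shape (the `add_one` closure stands in for the pool's single-transaction import; names are illustrative, not project APIs):

```rust
// Hypothetical per-transaction submission: each txn yields its own Result
// instead of the whole batch failing on the first error, so callers can
// resubmit only the rejected entries. Output order matches input order.
fn submit_each<T, I, F>(txns: Vec<T>, mut add_one: F) -> Vec<Result<I, String>>
where
    F: FnMut(T) -> Result<I, String>,
{
    txns.into_iter().map(|t| add_one(t)).collect()
}
```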


46-53: next_sequence_number_in_batch: specify None semantics and stability of ordering

The double-Option return is subtle. Please document:

  • Outer None means pool read lock unavailable (non-blocking behavior).
  • Inner Option per address is Some(next_seq) if there are pending txns for that sender in txpool, else None.
  • Output order matches input order.

Suggested docs:

 /// Returns next valid sequence number for given sender
 /// or `None` if there are no pending transactions from that sender in txpool.
 #[rpc(name = "txpool.next_sequence_number_in_batch")]
 fn next_sequence_number_in_batch(
     &self,
     addresses: Vec<AccountAddress>,
 ) -> FutureResult<Option<Vec<(AccountAddress, Option<u64>)>>>;
+/// Notes:
+/// - Outer None: txpool lock not acquired; caller may retry.
+/// - Inner None per address: no pending txns for that sender.
+/// - Preserves input order 1:1 with addresses.

54-57: Misplaced/copy-paste doc on txpool.state

The comment “or None if there are no pending transactions…” seems copied from the sequence-number method and doesn’t describe txpool.state. Please fix the doc.

-/// or `None` if there are no pending transactions from that sender in txpool.
 #[rpc(name = "txpool.state")]
 fn state(&self) -> FutureResult<TxPoolStatus>;
txpool/src/pool/queue.rs (1)

468-496: Batch next_sequence_number: good read-lock scoping; minor allocation/clone tweaks

The logic is correct and consistent with the single-address variant. Two micro-improvements:

  • Hoist stale_id and a single client clone outside the loop to avoid repetitive setup.
  • Optional: pre-allocate the Vec using addresses.len().
 pub fn next_sequence_number_in_batch<C: client::AccountSeqNumberClient>(
     &self,
     client: C,
     addresses: Vec<Address>,
 ) -> Option<Vec<(Address, Option<u64>)>> {
-    let pool = match self.pool.try_read() {
+    let pool = match self.pool.try_read() {
         Some(pool) => pool,
         None => return None,
     };
-
-    Some(
-        addresses
-            .iter()
-            .map(|address| {
-                // Also we ignore stale transactions in the queue.
-                let stale_id = None;
-                let state_readiness = ready::State::new(client.clone(), stale_id);
-                (
-                    *address,
-                    pool.pending_from_sender(state_readiness, address)
-                        .last()
-                        .map(|tx| tx.signed().sequence_number().saturating_add(1)),
-                )
-            })
-            .collect::<Vec<_>>(),
-    )
+    // Also we ignore stale transactions in the queue.
+    let stale_id = None;
+    let client = client.clone();
+    let mut out = Vec::with_capacity(addresses.len());
+    for address in addresses.iter() {
+        let state_readiness = ready::State::new(client.clone(), stale_id);
+        let next = pool
+            .pending_from_sender(state_readiness, address)
+            .last()
+            .map(|tx| tx.signed().sequence_number().saturating_add(1));
+        out.push((*address, next));
+    }
+    Some(out)
 }
rpc/server/src/module/account_rpc.rs (2)

164-175: Rename batch parameter to raw_txns for clarity and consistency.

The parameter carries a Vec, but is named in singular form. Align with intent and other batch APIs.

Apply this diff:

-    fn sign_txn_in_batch(
-        &self,
-        raw_txn: Vec<RawUserTransaction>,
-    ) -> FutureResult<Vec<SignedUserTransaction>> {
+    fn sign_txn_in_batch(
+        &self,
+        raw_txns: Vec<RawUserTransaction>,
+    ) -> FutureResult<Vec<SignedUserTransaction>> {
         let service = self.account.clone();
         let fut = async move {
-            let result = service.sign_txn_in_batch(raw_txn).await?;
+            let result = service.sign_txn_in_batch(raw_txns).await?;
             Ok(result)
         }
         .map_err(map_err);
         Box::pin(fut.boxed())
     }

197-213: Confirm default unlock duration of u32::MAX seconds is intentional; consider a config-driven default and max clamp.

Using ~136 years as the default unlock TTL could surprise operators if the caller omits duration. If product intent is “indefinite until restart,” document it; otherwise, consider reading a default TTL from NodeConfig (if available) and enforcing a sane upper bound (e.g., hours/days).

Would you like me to wire this to a config key and add validation?
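For illustration, a sketch of the resolution logic (both constants are assumptions standing in for values that would come from NodeConfig):

```rust
use std::time::Duration;

const DEFAULT_UNLOCK_SECS: u32 = 300; // assumed default; would come from NodeConfig
const MAX_UNLOCK_SECS: u32 = 86_400; // assumed upper bound (one day)

// Resolve the effective unlock TTL: fall back to the default when the
// caller omits a duration, and clamp explicit values to the maximum.
fn unlock_duration(requested: Option<u32>) -> Duration {
    let secs = requested.unwrap_or(DEFAULT_UNLOCK_SECS).min(MAX_UNLOCK_SECS);
    Duration::from_secs(u64::from(secs))
}
```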

txpool/api/src/lib.rs (2)

48-53: Clarify state_root-based API semantics in the docs.

The comment still reads generically. Explicitly mention what state_root represents to avoid ambiguity during state-root migration.

Apply this doc tweak:

-    /// alike get_pending_txns, it needs the pool client with the specific account state
+    /// Like `get_pending_txns`, but evaluates readiness against a specific global account state
+    /// identified by `state_root` (the Sparse Merkle Tree root). The pool consults this state
+    /// when determining nonces/balances for transaction readiness.
     fn get_pending_with_state(

59-65: Document outer vs inner Option semantics for batch sequence queries.

Outer None means the pool read lock wasn’t acquired (non-blocking behavior); inner None means “no pending txns for this address.” Callers need this distinction.

Apply this doc tweak:

-    /// Returns next valid sequence number for given sender
-    /// or `None` if there are no pending transactions from that sender.
+    /// Returns next valid sequence numbers in batch.
+    /// - Returns `None` (outer) if the pool read lock isn’t available (non-blocking fallback).
+    /// - Returns `Some(Vec<(address, seq_opt)>)` otherwise, where `seq_opt == None` indicates
+    ///   there are no pending transactions for that address in the pool.
     fn next_sequence_number_in_batch(
txpool/mock-service/src/lib.rs (3)

105-112: Provide non-panicking stubs for get_pending_with_state in the mock.

Returning a sensible fallback keeps tests using the mock from accidentally panicking.

Apply this diff:

     fn get_pending_with_state(
         &self,
         _max_len: u64,
         _current_timestamp_secs: Option<u64>,
         _state_root: HashValue,
     ) -> Vec<SignedUserTransaction> {
-        todo!()
+        // Ignore state_root in the mock; reuse existing behavior.
+        self.get_pending_txns(Some(_max_len), _current_timestamp_secs)
     }

114-121: Return a safe default instead of todo!() for next_sequence_number_with_state.

Keeps the mock usable even if this path is hit.

Apply this diff:

     fn next_sequence_number_with_state(
         &self,
         _address: AccountAddress,
         _state_root: HashValue,
     ) -> Option<u64> {
-        todo!()
+        // Mock: no pending txns tracked by address; return None.
+        None
     }

122-127: Implement a harmless batch default in the mock to avoid runtime panics.

Map all addresses to None to reflect “no pending” semantics.

Apply this diff:

     fn next_sequence_number_in_batch(
         &self,
         _addresses: Vec<AccountAddress>,
     ) -> Option<Vec<(AccountAddress, Option<u64>)>> {
-        todo!()
+        Some(_addresses.into_iter().map(|a| (a, None)).collect())
     }
account/api/src/service.rs (3)

37-41: Name the batch parameter raw_txns and add brief docs on error semantics.

Pluralize the Vec parameter and document that the call fails on the first error (no partial success), matching manager behavior.

Apply this diff:

-    async fn sign_txn_in_batch(
-        &self,
-        raw_txn: Vec<RawUserTransaction>,
-    ) -> Result<Vec<SignedUserTransaction>>;
+    /// Signs a batch of raw user transactions.
+    /// Fails the whole batch on the first error; no partial successes are returned.
+    async fn sign_txn_in_batch(
+        &self,
+        raw_txns: Vec<RawUserTransaction>,
+    ) -> Result<Vec<SignedUserTransaction>>;

173-185: Keep naming consistent, and fix the panic message spelling.

Use raw_txns and correct “Unexpect” → “Unexpected”. Consider returning an error instead of panicking to match Result ergonomics; the panic does align with existing style, though, so it can be refactored separately.

Apply this diff:

-    async fn sign_txn_in_batch(
-        &self,
-        raw_txn: Vec<RawUserTransaction>,
-    ) -> Result<Vec<SignedUserTransaction>> {
+    async fn sign_txn_in_batch(
+        &self,
+        raw_txns: Vec<RawUserTransaction>,
+    ) -> Result<Vec<SignedUserTransaction>> {
         let response = self
-            .send(AccountRequest::SignTxnInBatch { txns: raw_txn })
+            .send(AccountRequest::SignTxnInBatch { txns: raw_txns })
             .await??;
         if let AccountResponse::SignedTxnList(txns) = response {
             Ok(txns)
         } else {
-            panic!("Unexpect response type.")
+            panic!("Unexpected response type.")
         }
     }

203-216: Add brief docs for unlock_account_in_batch and fix “Unexpect” → “Unexpected”.

Docs clarify batch failure semantics; spelling fix improves consistency.

Apply this diff:

-    async fn unlock_account_in_batch(
+    /// Unlock multiple accounts in one call. Fails the whole batch on the first error.
+    async fn unlock_account_in_batch(
         &self,
         batch: Vec<(AccountAddress, String)>,
         duration: std::time::Duration,
     ) -> Result<Vec<AccountInfo>> {
         let response = self
             .send(AccountRequest::UnlockAccountInBatch(batch, duration))
             .await??;
         if let AccountResponse::AccountList(account_info) = response {
             Ok(account_info)
         } else {
-            panic!("Unexpect response type.")
+            panic!("Unexpected response type.")
         }
     }
cmd/tx-factory/src/main.rs (4)

589-605: LGTM; simplified mapping is fine.

This is a clear and correct simplification. Consider reusing next_sequence_number_in_batch for singletons to remove duplication, though it's not necessary.


607-651: Minor efficiency/readability tweaks for batch send.

  • Compute expiration_timestamp once per sender (or once globally) rather than per txn inside the inner loop.
  • For very large round_num, assembling all txns into memory may spike RAM. Consider chunked submission.

I can provide a chunked submission variant if you expect large batches.
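The chunked variant would look roughly like this (the `submit` closure stands in for the batch RPC; names are illustrative):

```rust
// Submit transactions in fixed-size chunks so peak memory and per-call
// payload size stay bounded even for very large round_num values.
// Returns the total number of txns sent, or the first chunk's error.
fn submit_chunked<T, F>(txns: Vec<T>, chunk_size: usize, mut submit: F) -> Result<usize, String>
where
    F: FnMut(&[T]) -> Result<(), String>,
{
    let mut sent = 0;
    for chunk in txns.chunks(chunk_size.max(1)) {
        submit(chunk)?;
        sent += chunk.len();
    }
    Ok(sent)
}
```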


87-87: Remove stale commented code to reduce noise.

The commented INITIAL_BALANCE path is no longer used; consider deleting to keep the codebase lean.

Also applies to: 490-499


662-689: Stress split: add guardrails for odd/small account counts.

mid = sequences.len() >> 1 with small/odd sizes can produce empty halves or uneven pairing. Add a sanity check and early return/log when mid == 0 or mid == sequences.len().

If desired, I can send a small patch that enforces even account counts or rounds down and logs.
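The guardrail could be as small as this (a sketch over a generic sequence list, not the actual tx-factory types):

```rust
// Split the sequence list into two halves for paired stress senders,
// refusing degenerate inputs instead of silently producing an empty half.
fn split_pairs<T>(sequences: Vec<T>) -> Option<(Vec<T>, Vec<T>)> {
    let mid = sequences.len() >> 1;
    if mid == 0 || mid == sequences.len() {
        return None; // fewer than two entries: nothing to pair
    }
    let mut first = sequences;
    let second = first.split_off(mid);
    Some((first, second))
}
```

Odd counts still round down (the second half gets the extra element), which the caller should log.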

txpool/src/tx_pool_service_impl.rs (2)

184-195: Batch sequence API wiring looks good.

Metrics label reuse is fine; consider a distinct label (e.g., next_sequence_number_in_batch) if you need separate observability.


197-210: Naming mismatch: next_sequence_number_with_header uses a state_root.

The inner method name no longer reflects its parameter. Consider renaming to next_sequence_number_with_state for clarity and to match the trait naming.

Apply this diff if you agree (local rename):

-    pub(crate) fn next_sequence_number_with_header(
+    pub(crate) fn next_sequence_number_with_state(
         &self,
         address: AccountAddress,
         state_root: HashValue,
     ) -> Option<u64> {
@@
-        self.queue.next_sequence_number(pool_client, &address)
+        self.queue.next_sequence_number(pool_client, &address)
     }

And update the caller at Line 208 accordingly.

miner/src/lib.rs (1)

290-293: Nit: reuse the computed block_hash in log.

You already compute block_hash; use it in the log for consistency.

-            info!(target: "miner", "Minted new block: {}", block.id());
+            info!(target: "miner", "Minted new block: {}", block_hash);
miner/src/create_block_template/process_transaction.rs (1)

129-205: Processing loop looks correct; consider removing _untouched_user_txns.

`_untouched_user_txns` is computed but never used; consider dropping the variable to reduce noise.

Also, thanks for removing invalid txns from the pool; that keeps the mempool clean.

miner/src/create_block_template/block_builder_service.rs (5)

616-645: Prevent duplicates and cap the batch size in fetch_transactions to avoid wasted pre-exec work.

  • Duplicates: Appending uncle (blue) block txns may duplicate txns already returned by the txpool or across uncles. This wastes VM cycles during pre-execution.
  • Cap: After merging and sorting you may exceed max_txns. Truncate to respect the cap and limit pre-exec load.

Apply:

- use std::collections::VecDeque;
+ use std::collections::{HashSet, VecDeque};
@@
-        blue_blocks.iter().for_each(|block| {
-            block.transactions().iter().for_each(|transaction| {
-                pending_transactions.push(transaction.clone());
-            })
-        });
+        // Dedup by txn id while merging uncle transactions
+        let mut seen: HashSet<HashValue> =
+            pending_transactions.iter().map(|t| t.id()).collect();
+        for block in blue_blocks.iter() {
+            for txn in block.transactions().iter() {
+                if seen.insert(txn.id()) {
+                    pending_transactions.push(txn.clone());
+                }
+            }
+        }
@@
-        Ok(pending_transactions)
+        // Respect the requested cap to avoid unnecessary pre-execution
+        if pending_transactions.len() > max_txns as usize {
+            pending_transactions.truncate(max_txns as usize);
+        }
+        Ok(pending_transactions)

Also applies to: 1-1


531-559: Don’t expect in the background pre-exec task; convert to error logging and early return.

A panic here crashes the miner’s background worker, which is avoidable and makes diagnosing transient txpool/state issues harder. Prefer logging + early return on errors from ProcessTransactionData::new(), .process(), and sender.send().

-        async_std::task::spawn(async move {
-            let process_trans = ProcessTransactionData::new(
+        async_std::task::spawn(async move {
+            let process_trans = match ProcessTransactionData::new(
                 storage,
                 selected_header,
                 Arc::new(main.into_statedb()),
                 txn_provider,
                 txn,
                 block_gas_limit,
                 0,
                 block_meta.clone(),
                 vm_metrics,
-            )
-            .expect("failed to init process transaction");
+            ) {
+                Ok(v) => v,
+                Err(e) => {
+                    error!("failed to init process transaction: {:?}", e);
+                    return;
+                }
+            };
 
-            let result = process_trans
-                .process()
-                .expect("failed to process transaction");
+            let result = match process_trans.process() {
+                Ok(r) => r,
+                Err(e) => {
+                    error!("failed to process transaction: {:?}", e);
+                    return;
+                }
+            };
 
-            sender
-                .send(ProcessHeaderTemplate {
+            if let Err(e) = sender.send(ProcessHeaderTemplate {
                     header: previous_header,
                     uncles,
                     difficulty,
                     strategy,
                     transaction_outputs: result,
                     block_metadata: block_meta,
                     pruning_point,
-                })
-                .expect("failed to send result");
+                }) {
+                error!("failed to send ProcessHeaderTemplate: {:?}", e);
+            }
         });

153-201: Bound concurrency/backpressure to prevent runaway spawning under bursty request rates.

Each BlockTemplateRequest spawns a pre-exec task, and the channel is unbounded. Under bursts (e.g., fast headers or retries), tasks and queued results can grow without bound.

Options:

  • Use a bounded crossbeam::channel::bounded(N) and try_send to drop or coalesce when full.
  • Track in-flight pre-exec tasks with an AtomicUsize and skip spawning if above a threshold (e.g., 16; aligns with prior MinerService task-pool learning).
  • Debounce BlockTemplateRequest events or coalesce pending requests.

I can prototype a bounded-channel variant if you confirm desired semantics (drop, latest-wins, or backpressure).

Also applies to: 530-560


153-201: Minor: avoid repeated get_shared in the hot loop (micro).

Inside the 10ms interval closure, ctx.get_shared::<Arc<Storage>>() is called every tick. It’s cheap (Arc clone), but you can read it once into a local (captured by the closure) by retrieving it just before run_interval and moving it in. Not a blocker.


564-608: Large commented-out legacy path — consider removing or gating with a feature flag.

The old OpenedBlock path is kept commented. If it’s no longer needed, delete it to reduce noise; if it’s for fallback, consider #[cfg(feature = "legacy-template")] to keep it compile-safe.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 4189c39 and 8e1d951.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (29)
  • account/api/src/message.rs (3 hunks)
  • account/api/src/service.rs (3 hunks)
  • account/service/src/service.rs (2 hunks)
  • account/src/account_manager.rs (2 hunks)
  • block-relayer/src/block_relayer.rs (1 hunks)
  • chain/api/src/errors.rs (2 hunks)
  • chain/open-block/src/lib.rs (3 hunks)
  • chain/src/chain.rs (4 hunks)
  • chain/src/verifier/mod.rs (2 hunks)
  • cmd/tx-factory/src/main.rs (10 hunks)
  • config/src/genesis_config.rs (1 hunks)
  • miner/Cargo.toml (1 hunks)
  • miner/src/create_block_template/block_builder_service.rs (11 hunks)
  • miner/src/create_block_template/mod.rs (1 hunks)
  • miner/src/create_block_template/process_transaction.rs (1 hunks)
  • miner/src/lib.rs (5 hunks)
  • miner/tests/miner_test.rs (1 hunks)
  • rpc/api/src/account/mod.rs (2 hunks)
  • rpc/api/src/txpool/mod.rs (2 hunks)
  • rpc/client/src/lib.rs (3 hunks)
  • rpc/server/src/module/account_rpc.rs (2 hunks)
  • rpc/server/src/module/txpool_rpc.rs (2 hunks)
  • sync/src/block_connector/execute_service.rs (6 hunks)
  • txpool/api/src/lib.rs (1 hunks)
  • txpool/mock-service/src/lib.rs (1 hunks)
  • txpool/src/pool/queue.rs (1 hunks)
  • txpool/src/pool_client.rs (3 hunks)
  • txpool/src/tx_pool_service_impl.rs (5 hunks)
  • vm/vm-runtime/src/starcoin_vm.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-03T03:25:16.732Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4572
File: miner/src/create_block_template/new_header_service.rs:0-0
Timestamp: 2025-07-03T03:25:16.732Z
Learning: In Starcoin's miner/src/create_block_template/new_header_service.rs, panic! is intentionally used in impossible code branches (like equal block ID comparison after early equality check) to detect logical errors early and ensure immediate restart rather than allowing potentially corrupted state to continue.

Applied to files:

  • miner/tests/miner_test.rs
  • miner/src/lib.rs
  • miner/src/create_block_template/block_builder_service.rs
📚 Learning: 2025-08-08T10:20:45.797Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: chain/src/chain.rs:1104-1106
Timestamp: 2025-08-08T10:20:45.797Z
Learning: In starcoin/chain/src/chain.rs, ChainReader::get_block(hash) is intentionally implemented to return any block from storage without verifying membership in the current main chain/DAG view (the previous exist_block_filter was removed). Callers that require main-chain-only results should perform their own existence/membership checks (e.g., exist_block/check_exist_block) as needed.

Applied to files:

  • chain/src/verifier/mod.rs
  • sync/src/block_connector/execute_service.rs
  • chain/src/chain.rs
📚 Learning: 2025-08-08T10:27:43.881Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: txpool/src/pool/queue.rs:389-392
Timestamp: 2025-08-08T10:27:43.881Z
Learning: In starcoinorg/starcoin#4605, txpool/src/pool/queue.rs: For PendingOrdering::Priority, if pool.try_read() fails, the design is to return an empty Vec and proceed with block building without waiting for the lock (non-blocking behavior is intentional).

Applied to files:

  • txpool/src/pool/queue.rs
  • txpool/api/src/lib.rs
  • txpool/src/pool_client.rs
  • txpool/mock-service/src/lib.rs
  • miner/src/create_block_template/block_builder_service.rs
📚 Learning: 2025-08-08T10:25:49.039Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: txpool/mock-service/src/lib.rs:114-120
Timestamp: 2025-08-08T10:25:49.039Z
Learning: In PR starcoinorg/starcoin#4605, txpool/mock-service/src/lib.rs: MockTxPoolService::next_sequence_number_with_header is currently unused; keeping todo!() in the mock is acceptable and won’t affect runtime unless invoked.

Applied to files:

  • txpool/src/pool/queue.rs
  • rpc/server/src/module/txpool_rpc.rs
  • txpool/api/src/lib.rs
  • txpool/src/pool_client.rs
  • txpool/mock-service/src/lib.rs
  • txpool/src/tx_pool_service_impl.rs
📚 Learning: 2025-08-08T10:16:46.394Z
Learnt from: jackzhhuang
PR: starcoinorg/starcoin#4605
File: chain/src/chain.rs:185-187
Timestamp: 2025-08-08T10:16:46.394Z
Learning: In starcoin/chain/src/chain.rs, BlockChain::statedb currently consumes self to move out ChainStateDB during block building to avoid cloning or recreating the state DB for transaction filtering. This is intentional for performance; a rename to into_statedb is acceptable but behavior should remain a consuming getter.

Applied to files:

  • txpool/src/pool_client.rs
  • chain/src/chain.rs
  • chain/open-block/src/lib.rs
📚 Learning: 2025-02-10T10:00:57.422Z
Learnt from: sanlee42
PR: starcoinorg/starcoin#4394
File: miner/src/lib.rs:259-263
Timestamp: 2025-02-10T10:00:57.422Z
Learning: The MinerService's task_pool is limited to 16 elements (DEFAULT_TASK_POOL_SIZE), making Vec a suitable data structure despite O(n) front removal operations.

Applied to files:

  • miner/src/create_block_template/block_builder_service.rs
🧬 Code graph analysis (22)
rpc/api/src/account/mod.rs (3)
account/api/src/service.rs (2)
  • sign_txn_in_batch (37-40)
  • sign_txn_in_batch (173-185)
account/src/account_manager.rs (1)
  • sign_txn_in_batch (276-298)
rpc/server/src/module/account_rpc.rs (2)
  • sign_txn_in_batch (164-175)
  • unlock_in_batch (197-213)
rpc/client/src/lib.rs (6)
cmd/tx-factory/src/main.rs (1)
  • next_sequence_number_in_batch (562-587)
txpool/src/pool/queue.rs (2)
  • next_sequence_number_in_batch (470-496)
  • addresses (481-494)
rpc/api/src/txpool/mod.rs (3)
  • next_sequence_number_in_batch (49-52)
  • submit_transaction (17-17)
  • submit_transactions (20-20)
rpc/server/src/module/txpool_rpc.rs (3)
  • next_sequence_number_in_batch (112-118)
  • submit_transaction (34-44)
  • submit_transactions (46-59)
txpool/api/src/lib.rs (1)
  • next_sequence_number_in_batch (61-64)
txpool/src/tx_pool_service_impl.rs (2)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
account/src/account_manager.rs (3)
account/api/src/service.rs (4)
  • unlock_account_in_batch (47-51)
  • unlock_account_in_batch (203-216)
  • sign_txn_in_batch (37-40)
  • sign_txn_in_batch (173-185)
account/src/account.rs (2)
  • address (154-156)
  • load (66-103)
rpc/server/src/module/account_rpc.rs (1)
  • sign_txn_in_batch (164-175)
cmd/tx-factory/src/main.rs (5)
txpool/src/pool/queue.rs (4)
  • next_sequence_number_in_batch (470-496)
  • addresses (481-494)
  • new (176-181)
  • new (230-250)
rpc/api/src/txpool/mod.rs (1)
  • next_sequence_number_in_batch (49-52)
rpc/client/src/lib.rs (5)
  • next_sequence_number_in_batch (300-306)
  • state_reader (534-539)
  • new (107-112)
  • new (138-174)
  • new (1220-1236)
rpc/server/src/module/txpool_rpc.rs (1)
  • next_sequence_number_in_batch (112-118)
txpool/src/tx_pool_service_impl.rs (2)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
vm/vm-runtime/src/starcoin_vm.rs (2)
vm/vm-runtime/src/lib.rs (1)
  • execute_block (37-42)
vm/vm-runtime/src/parallel_executor/mod.rs (1)
  • execute_block (67-111)
chain/src/verifier/mod.rs (2)
chain/src/chain.rs (5)
  • has_dag_block (1365-1376)
  • e (173-173)
  • e (688-688)
  • e (1600-1600)
  • is_dag_ancestor_of (1394-1396)
types/src/block/mod.rs (1)
  • pruning_point (341-343)
txpool/src/pool/queue.rs (7)
cmd/tx-factory/src/main.rs (2)
  • next_sequence_number_in_batch (562-587)
  • new (225-258)
rpc/api/src/txpool/mod.rs (1)
  • next_sequence_number_in_batch (49-52)
rpc/client/src/lib.rs (4)
  • next_sequence_number_in_batch (300-306)
  • new (107-112)
  • new (138-174)
  • new (1220-1236)
rpc/server/src/module/txpool_rpc.rs (1)
  • next_sequence_number_in_batch (112-118)
txpool/api/src/lib.rs (2)
  • next_sequence_number_in_batch (61-64)
  • new (100-102)
txpool/mock-service/src/lib.rs (3)
  • next_sequence_number_in_batch (122-127)
  • pool (60-67)
  • new (22-24)
txpool/src/tx_pool_service_impl.rs (3)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
  • new (37-74)
rpc/server/src/module/txpool_rpc.rs (6)
rpc/api/src/txpool/mod.rs (2)
  • submit_transactions (20-20)
  • next_sequence_number_in_batch (49-52)
rpc/client/src/lib.rs (2)
  • submit_transactions (313-319)
  • next_sequence_number_in_batch (300-306)
rpc/server/src/module/mod.rs (1)
  • convert_to_rpc_error (64-67)
txpool/src/pool/queue.rs (3)
  • ready (396-412)
  • next_sequence_number_in_batch (470-496)
  • addresses (481-494)
txpool/api/src/lib.rs (1)
  • next_sequence_number_in_batch (61-64)
txpool/src/tx_pool_service_impl.rs (2)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
txpool/api/src/lib.rs (7)
txpool/mock-service/src/lib.rs (4)
  • get_pending_with_state (105-112)
  • next_sequence_number (74-76)
  • next_sequence_number_in_batch (122-127)
  • next_sequence_number_with_state (114-120)
txpool/src/tx_pool_service_impl.rs (6)
  • get_pending_with_state (145-170)
  • next_sequence_number (174-182)
  • next_sequence_number (380-383)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
  • next_sequence_number_with_state (197-210)
txpool/src/pool/queue.rs (3)
  • next_sequence_number (448-466)
  • next_sequence_number_in_batch (470-496)
  • addresses (481-494)
rpc/api/src/txpool/mod.rs (2)
  • next_sequence_number (44-44)
  • next_sequence_number_in_batch (49-52)
rpc/server/src/module/txpool_rpc.rs (2)
  • next_sequence_number (107-110)
  • next_sequence_number_in_batch (112-118)
cmd/tx-factory/src/main.rs (1)
  • next_sequence_number_in_batch (562-587)
rpc/client/src/lib.rs (1)
  • next_sequence_number_in_batch (300-306)
txpool/src/pool_client.rs (2)
chain/open-block/src/lib.rs (2)
  • state_root (131-133)
  • new (54-112)
txpool/src/tx_pool_service_impl.rs (1)
  • new (37-74)
txpool/mock-service/src/lib.rs (6)
txpool/api/src/lib.rs (3)
  • get_pending_with_state (48-53)
  • next_sequence_number_with_state (67-71)
  • next_sequence_number_in_batch (61-64)
txpool/src/tx_pool_service_impl.rs (4)
  • get_pending_with_state (145-170)
  • next_sequence_number_with_state (197-210)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
cmd/tx-factory/src/main.rs (1)
  • next_sequence_number_in_batch (562-587)
txpool/src/pool/queue.rs (1)
  • next_sequence_number_in_batch (470-496)
rpc/client/src/lib.rs (1)
  • next_sequence_number_in_batch (300-306)
rpc/server/src/module/txpool_rpc.rs (1)
  • next_sequence_number_in_batch (112-118)
account/api/src/service.rs (3)
account/src/account_manager.rs (4)
  • sign_txn_in_batch (276-298)
  • unlock_account (87-100)
  • unlock_account_in_batch (102-119)
  • account_info (223-238)
rpc/api/src/account/mod.rs (1)
  • sign_txn_in_batch (46-49)
rpc/server/src/module/account_rpc.rs (1)
  • sign_txn_in_batch (164-175)
rpc/server/src/module/account_rpc.rs (4)
account/api/src/service.rs (2)
  • sign_txn_in_batch (37-40)
  • sign_txn_in_batch (173-185)
account/src/account_manager.rs (1)
  • sign_txn_in_batch (276-298)
rpc/api/src/account/mod.rs (2)
  • sign_txn_in_batch (46-49)
  • unlock_in_batch (62-66)
rpc/client/src/lib.rs (1)
  • map_err (1239-1241)
rpc/api/src/txpool/mod.rs (7)
rpc/client/src/lib.rs (2)
  • submit_transactions (313-319)
  • next_sequence_number_in_batch (300-306)
rpc/server/src/module/txpool_rpc.rs (2)
  • submit_transactions (46-59)
  • next_sequence_number_in_batch (112-118)
cmd/tx-factory/src/main.rs (1)
  • next_sequence_number_in_batch (562-587)
txpool/src/pool/queue.rs (2)
  • next_sequence_number_in_batch (470-496)
  • addresses (481-494)
txpool/api/src/lib.rs (1)
  • next_sequence_number_in_batch (61-64)
txpool/mock-service/src/lib.rs (1)
  • next_sequence_number_in_batch (122-127)
txpool/src/tx_pool_service_impl.rs (2)
  • next_sequence_number_in_batch (184-195)
  • next_sequence_number_in_batch (385-391)
account/service/src/service.rs (2)
block-relayer/src/block_relayer.rs (1)
  • txns (164-164)
rpc/api/src/types.rs (1)
  • txns (808-810)
miner/src/create_block_template/process_transaction.rs (2)
executor/src/executor.rs (1)
  • execute_transactions (13-19)
miner/src/create_block_template/block_builder_service.rs (1)
  • new (289-318)
sync/src/block_connector/execute_service.rs (3)
chain/src/chain.rs (10)
  • info (1031-1037)
  • new (77-88)
  • time_service (196-198)
  • time_service (1192-1194)
  • dag (200-202)
  • execute (1244-1246)
  • e (173-173)
  • e (688-688)
  • e (1600-1600)
  • get_block (1110-1112)
miner/src/create_block_template/block_builder_service.rs (17)
  • crossbeam (303-303)
  • new (289-318)
  • dag (791-799)
  • ctx (95-95)
  • ctx (112-112)
  • ctx (117-117)
  • ctx (118-118)
  • ctx (119-119)
  • ctx (125-125)
  • ctx (138-138)
  • ctx (150-150)
  • ctx (151-151)
  • ctx (153-153)
  • ctx (160-161)
  • ctx (163-163)
  • ctx (206-206)
  • ctx (207-207)
sync/src/block_connector/write_block_chain.rs (1)
  • execute (422-432)
chain/src/chain.rs (4)
types/src/block/mod.rs (5)
  • block_id (1050-1052)
  • state_root (365-367)
  • header (845-847)
  • header (1222-1224)
  • header (1238-1240)
executor/src/block_executor.rs (2)
  • statedb (132-133)
  • block_execute (64-126)
state/api/src/chain_state.rs (1)
  • state_root (128-128)
state/statedb/src/lib.rs (1)
  • state_root (484-486)
chain/open-block/src/lib.rs (3)
chain/src/chain.rs (2)
  • new (77-88)
  • chain_state (1795-1797)
chain/api/src/chain.rs (1)
  • chain_state (138-138)
rpc/client/src/lib.rs (1)
  • state_reader (534-539)
miner/src/lib.rs (1)
miner/src/create_block_template/block_builder_service.rs (12)
  • create_block_template (417-608)
  • stopped (205-209)
  • ctx (95-95)
  • ctx (112-112)
  • ctx (117-117)
  • ctx (118-118)
  • ctx (119-119)
  • ctx (125-125)
  • ctx (138-138)
  • ctx (150-150)
  • handle_event (213-222)
  • handle_event (226-242)
miner/src/create_block_template/block_builder_service.rs (4)
types/src/block/mod.rs (7)
  • block_info (1218-1220)
  • block (1214-1216)
  • state_root (365-367)
  • uncles (852-857)
  • uncles (1234-1236)
  • timestamp (345-347)
  • author (353-355)
sync/src/block_connector/execute_service.rs (13)
  • ctx (167-167)
  • ctx (169-169)
  • ctx (170-170)
  • ctx (178-178)
  • ctx (179-179)
  • ctx (181-181)
  • ctx (188-189)
  • ctx (235-235)
  • ctx (236-236)
  • crossbeam (43-43)
  • new (42-51)
  • handle_event (242-289)
  • handle_event (293-321)
miner/src/create_block_template/process_transaction.rs (1)
  • new (59-89)
vm/types/src/block_metadata/mod.rs (3)
  • timestamp (137-139)
  • new_with_parents (76-103)
  • author (154-156)
txpool/src/tx_pool_service_impl.rs (7)
txpool/api/src/lib.rs (4)
  • get_pending_with_state (48-53)
  • next_sequence_number_in_batch (61-64)
  • next_sequence_number_with_state (67-71)
  • new (100-102)
txpool/mock-service/src/lib.rs (4)
  • get_pending_with_state (105-112)
  • next_sequence_number_in_batch (122-127)
  • next_sequence_number_with_state (114-120)
  • new (22-24)
chain/open-block/src/lib.rs (1)
  • state_root (131-133)
cmd/tx-factory/src/main.rs (1)
  • next_sequence_number_in_batch (562-587)
txpool/src/pool/queue.rs (2)
  • next_sequence_number_in_batch (470-496)
  • addresses (481-494)
rpc/client/src/lib.rs (1)
  • next_sequence_number_in_batch (300-306)
rpc/server/src/module/txpool_rpc.rs (1)
  • next_sequence_number_in_batch (112-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: run benchmark

Comment on lines +276 to +298
pub fn sign_txn_in_batch(
&self,
raw_txns: Vec<RawUserTransaction>,
) -> AccountResult<Vec<SignedUserTransaction>> {
let mut signed_transactions = vec![];
for raw_txn in raw_txns {
let signer_address = raw_txn.sender(); // TODO: check if the signer is the same as the one in the txi
let pass = self.key_cache.write().get_pass(&signer_address);
match pass {
None => return Err(AccountError::AccountLocked(signer_address)),
Some(p) => {
let account = Account::load(signer_address, Some(p), self.store.clone())?
.ok_or(AccountError::AccountNotExist(signer_address))?;
signed_transactions.push(
account
.sign_txn(raw_txn)
.map_err(AccountError::TransactionSignError)?,
);
}
}
}
Ok(signed_transactions)
}

⚠️ Potential issue

Critical: Early return on first error will leave partial state

The sign_txn_in_batch method returns immediately when encountering the first locked account, which could leave the caller with an incomplete understanding of which accounts are locked.

The current implementation fails fast on the first locked account, potentially masking other locked accounts in the batch. Consider either:

  1. Collecting all errors and returning a comprehensive error result
  2. Processing all unlocked accounts and returning partial results with error indicators

Also, the TODO comment on line 282 needs clarification - what specific signer check is needed?

     pub fn sign_txn_in_batch(
         &self,
         raw_txns: Vec<RawUserTransaction>,
     ) -> AccountResult<Vec<SignedUserTransaction>> {
         let mut signed_transactions = vec![];
+        let mut errors = vec![];
         for raw_txn in raw_txns {
-            let signer_address = raw_txn.sender(); // TODO: check if the signer is the same as the one in the txi
+            let signer_address = raw_txn.sender();
             let pass = self.key_cache.write().get_pass(&signer_address);
             match pass {
-                None => return Err(AccountError::AccountLocked(signer_address)),
+                None => errors.push((signer_address, AccountError::AccountLocked(signer_address))),
                 Some(p) => {
                     let account = Account::load(signer_address, Some(p), self.store.clone())?
                         .ok_or(AccountError::AccountNotExist(signer_address))?;
                     signed_transactions.push(
                         account
                             .sign_txn(raw_txn)
                             .map_err(AccountError::TransactionSignError)?,
                     );
                 }
             }
         }
+        if !errors.is_empty() {
+            // Return a batch error with all locked accounts
+            return Err(AccountError::BatchOperationFailed(errors));
+        }
         Ok(signed_transactions)
     }

Note: This would require adding a new BatchOperationFailed variant to the AccountError enum.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In account/src/account_manager.rs around lines 276-298, the method currently
returns early on the first locked account which yields partial, opaque results;
change it to process all transactions collecting per-transaction successes and
errors and return a single failure variant that contains all errors and any
partial signed transactions. Specifically: add an
AccountError::BatchOperationFailed(Vec<(AccountAddress, AccountError)>,
Vec<SignedUserTransaction>) variant, iterate all raw_txns without early returns,
for each txn attempt to get the pass and load/sign the account—on failure push
(signer_address, error) into an errors vec, on success push the signed txn into
a results vec; at the end, return Ok(results) if errors.is_empty() else
Err(AccountError::BatchOperationFailed(errors, results)). Also address the TODO
by explicitly validating that the signer_address equals the transaction
sender/authenticator field (e.g., check raw_txn.sender() vs the signer in the tx
info) and treat mismatches as an error recorded in the errors vec.
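The collect-then-report flow the prompt describes can be sketched in isolation. Everything here is a hypothetical stand-in: `BatchError` plays the role of the proposed `AccountError::BatchOperationFailed` variant, integer sender ids replace `AccountAddress`, and the `is_locked` closure replaces the key-cache lookup.

```rust
// Hypothetical stand-ins for the real account types: sender ids are u64,
// `is_locked` plays the role of the key-cache pass lookup, and a "signed"
// transaction is just the sender id echoed back.
#[derive(Debug, PartialEq)]
enum BatchError {
    // All per-sender failures, plus the transactions that did sign.
    OperationFailed(Vec<(u64, String)>, Vec<u64>),
}

fn sign_txn_in_batch(
    txns: Vec<u64>,
    is_locked: impl Fn(u64) -> bool,
) -> Result<Vec<u64>, BatchError> {
    let mut signed = Vec::new();
    let mut errors = Vec::new();
    for sender in txns {
        if is_locked(sender) {
            // Record the failure and keep going instead of returning early,
            // so the caller learns about every locked account at once.
            errors.push((sender, format!("account {sender} is locked")));
        } else {
            signed.push(sender); // stands in for account.sign_txn(raw_txn)
        }
    }
    if errors.is_empty() {
        Ok(signed)
    } else {
        Err(BatchError::OperationFailed(errors, signed))
    }
}

fn main() {
    // Senders 2 and 4 are locked; the batch still reports 1, 3, 5 as signed.
    match sign_txn_in_batch(vec![1, 2, 3, 4, 5], |s| s % 2 == 0) {
        Err(BatchError::OperationFailed(errors, partial)) => {
            assert_eq!(errors.len(), 2);
            assert_eq!(partial, vec![1, 3, 5]);
            println!("{} locked, {} signed", errors.len(), partial.len());
        }
        Ok(_) => unreachable!("batch contains locked senders"),
    }
}
```

The caller can then decide whether the partial results are usable or whether the whole batch should be retried after unlocking the reported accounts.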

Comment on lines +145 to 147
pub fn state_reader(&self) -> Arc<impl ChainStateReader> {
self.state.clone()
}

⚠️ Potential issue

This signature will not compile: cannot return `Arc<impl Trait>`.

Rust doesn’t support nesting impl Trait inside other types. Use a trait object instead.

Apply:

-    pub fn state_reader(&self) -> Arc<impl ChainStateReader> {
-        self.state.clone()
-    }
+    pub fn state_reader(&self) -> Arc<dyn ChainStateReader + Send + Sync> {
+        self.state.clone()
+    }

Note: ChainStateDB must implement ChainStateReader + Send + Sync + 'static for this to coerce. This keeps the field concrete (`Arc<ChainStateDB>`) while exposing a read-only trait object.


Comment on lines 263 to 273
let now = SystemTime::now();
now.duration_since(UNIX_EPOCH)
.expect("time error")
.as_secs()
+ DEFAULT_EXPIRATION_TIME
// let node_info = self
// .client
// .node_info()
// .expect("node_info() should not failed");
// node_info.now_seconds + DEFAULT_EXPIRATION_TIME
}

🛠️ Refactor suggestion

Use chain/node time for expiration; fall back to system clock.

Relying solely on SystemTime can drift from the node’s canonical time and cause premature/expired txns. Prefer node_info().now_seconds, with a system-time fallback on error.

Apply this diff to make expiration robust:

-    fn fetch_expiration_time(&self) -> u64 {
-        let now = SystemTime::now();
-        now.duration_since(UNIX_EPOCH)
-            .expect("time error")
-            .as_secs()
-            + DEFAULT_EXPIRATION_TIME
-        // let node_info = self
-        //     .client
-        //     .node_info()
-        //     .expect("node_info() should not failed");
-        // node_info.now_seconds + DEFAULT_EXPIRATION_TIME
-    }
+    fn fetch_expiration_time(&self) -> u64 {
+        match self.client.node_info() {
+            Ok(info) => info.now_seconds + DEFAULT_EXPIRATION_TIME,
+            Err(_) => {
+                let now = SystemTime::now()
+                    .duration_since(UNIX_EPOCH)
+                    .expect("time error")
+                    .as_secs();
+                now + DEFAULT_EXPIRATION_TIME
+            }
+        }
+    }
🤖 Prompt for AI Agents
In cmd/tx-factory/src/main.rs around lines 263 to 273, replace the current
unconditional SystemTime-based expiration with logic that first attempts to read
the node time via self.client.node_info().now_seconds and uses that +
DEFAULT_EXPIRATION_TIME; if node_info() fails, fall back to computing the
expiration from SystemTime::now().duration_since(UNIX_EPOCH).expect("time
error").as_secs() + DEFAULT_EXPIRATION_TIME. Make the node_info call
non-panicking (handle Result), propagate or log the error if it occurs, and
ensure the returned expiration is a u64 consistent with the rest of the
function.

Comment on lines +363 to +393
fn submit_transaction_in_batch(
&self,
txns: Vec<(AccountAddress, Vec<RawUserTransaction>)>,
blocking: bool,
) -> Result<()> {
info!("going to unlock accounts");
self.client.account_unlock_in_batch(
txns.iter()
.map(|(sender, _)| (*sender, self.account_password.clone()))
.collect(),
self.unlock_duration,
)?;

let signed_transactions = self.client.account_sign_txn_in_batch(
txns.iter()
.flat_map(|(_, raw_txns)| raw_txns.clone())
.collect(),
)?;

let hashes = self.client.submit_transactions(signed_transactions)?;
info!("submitted {} txns", hashes.len());

if blocking {
for hash in hashes {
self.client
.watch_txn(hash, Some(Duration::from_secs(self.watch_timeout as u64)))?;
}
}
Ok(())
}


🛠️ Refactor suggestion

Batch flow: validate unlock/sign results and preserve 1:1 counts.

  • Check that unlock_in_batch returns one result per sender entry; fail fast if not.
  • Verify sign_txn_in_batch returns exactly the number of raw txns submitted; otherwise some txns weren’t signed.
  • Optional: dedup senders in unlock batch to avoid redundant unlocks if the caller accidentally repeats senders.

Apply this diff to add basic count validation:

     fn submit_transaction_in_batch(
         &self,
         txns: Vec<(AccountAddress, Vec<RawUserTransaction>)>,
         blocking: bool,
     ) -> Result<()> {
         info!("going to unlock accounts");
-        self.client.account_unlock_in_batch(
+        let unlocked = self.client.account_unlock_in_batch(
             txns.iter()
                 .map(|(sender, _)| (*sender, self.account_password.clone()))
                 .collect(),
             self.unlock_duration,
         )?;
+        if unlocked.len() != txns.len() {
+            warn!(
+                "unlock_in_batch returned {} accounts, expected {}",
+                unlocked.len(),
+                txns.len()
+            );
+        }
 
-        let signed_transactions = self.client.account_sign_txn_in_batch(
+        let signed_transactions = self.client.account_sign_txn_in_batch(
             txns.iter()
                 .flat_map(|(_, raw_txns)| raw_txns.clone())
                 .collect(),
         )?;
+        let expected = txns.iter().map(|(_, raw_txns)| raw_txns.len()).sum::<usize>();
+        if signed_transactions.len() != expected {
+            bail!(
+                "signed_txn count mismatch: expected {}, got {}",
+                expected,
+                signed_transactions.len()
+            );
+        }

If you want sender dedup as a safety net, I can provide a patch using a HashSet.

🤖 Prompt for AI Agents
In cmd/tx-factory/src/main.rs around lines 363 to 393, the batch flow assumes
unlock and sign RPCs succeed for every input but does not validate result
counts; update the function to (1) validate account_unlock_in_batch returns
exactly one result per provided sender (fail fast with a clear Err if counts
differ), (2) verify account_sign_txn_in_batch returns the same number of signed
transactions as the number of raw txns passed in (fail fast if counts differ),
and (3) preserve the 1:1 mapping between input raw transactions and returned
signed transactions before calling submit_transactions; optionally deduplicate
senders before calling account_unlock_in_batch (using a HashSet) to avoid
redundant unlocks if callers repeat senders.
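The HashSet-based sender dedup offered above can be sketched like this, with addresses simplified to integers for illustration; first-seen order is preserved so the unlock batch stays deterministic.

```rust
use std::collections::HashSet;

// Deduplicate (sender, password) unlock requests while preserving first-seen
// order, so a repeated sender triggers only one unlock call.
fn dedup_unlock_batch(batch: Vec<(u64, String)>) -> Vec<(u64, String)> {
    let mut seen = HashSet::new();
    batch
        .into_iter()
        // HashSet::insert returns false for an already-seen sender,
        // which filters the duplicate entry out.
        .filter(|(sender, _)| seen.insert(*sender))
        .collect()
}

fn main() {
    let batch = vec![
        (1, "pw".to_string()),
        (2, "pw".to_string()),
        (1, "pw".to_string()), // duplicate sender, dropped
    ];
    let deduped = dedup_unlock_batch(batch);
    assert_eq!(deduped.len(), 2);
    assert_eq!(deduped[0].0, 1);
    assert_eq!(deduped[1].0, 2);
    println!("{} unique senders", deduped.len());
}
```

Note that deduplicating senders must not be applied to the raw transaction list itself, only to the unlock batch, or the 1:1 txn-to-signature mapping would break.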

Comment on lines +562 to +587
fn next_sequence_number_in_batch(
&self,
addresses: Vec<AccountAddress>,
) -> Result<Vec<(AccountAddress, Option<u64>)>> {
let seq_numbers = self
.client
.next_sequence_number_in_batch(addresses)?
.ok_or_else(|| format_err!("next_sequence_number_in_batch error"))?;
Ok(seq_numbers
.into_iter()
.map(|(address, seq_number)| match seq_number {
Some(seq_number) => (address, Some(seq_number)),
None => {
let state_reader = self
.client
.state_reader(StateRootOption::Latest)
.expect("state_reader error");
let account_resource = state_reader
.get_account_resource(address)
.expect("get_account_resource error");
let seq = account_resource.map(|resource| resource.sequence_number());
(address, seq)
}
})
.collect())
}

🛠️ Refactor suggestion

Avoid panics in batch sequence fallback; propagate errors instead.

Using expect within the iterator will crash the process on transient RPC/state errors. Refactor to return a Result with proper error context.

Apply this diff to make the function fallible without panicking:

-    fn next_sequence_number_in_batch(
-        &self,
-        addresses: Vec<AccountAddress>,
-    ) -> Result<Vec<(AccountAddress, Option<u64>)>> {
-        let seq_numbers = self
-            .client
-            .next_sequence_number_in_batch(addresses)?
-            .ok_or_else(|| format_err!("next_sequence_number_in_batch error"))?;
-        Ok(seq_numbers
-            .into_iter()
-            .map(|(address, seq_number)| match seq_number {
-                Some(seq_number) => (address, Some(seq_number)),
-                None => {
-                    let state_reader = self
-                        .client
-                        .state_reader(StateRootOption::Latest)
-                        .expect("state_reader error");
-                    let account_resource = state_reader
-                        .get_account_resource(address)
-                        .expect("get_account_resource error");
-                    let seq = account_resource.map(|resource| resource.sequence_number());
-                    (address, seq)
-                }
-            })
-            .collect())
-    }
+    fn next_sequence_number_in_batch(
+        &self,
+        addresses: Vec<AccountAddress>,
+    ) -> Result<Vec<(AccountAddress, Option<u64>)>> {
+        let seq_numbers = self
+            .client
+            .next_sequence_number_in_batch(addresses.clone())?
+            .ok_or_else(|| format_err!("next_sequence_number_in_batch error"))?;
+        let state_reader = self.client.state_reader(StateRootOption::Latest)?;
+        let mut out = Vec::with_capacity(seq_numbers.len());
+        for (address, seq_number) in seq_numbers {
+            if let Some(seq) = seq_number {
+                out.push((address, Some(seq)));
+            } else {
+                let seq = state_reader
+                    .get_account_resource(address)?
+                    .map(|resource| resource.sequence_number());
+                out.push((address, seq));
+            }
+        }
+        Ok(out)
+    }

Comment on lines +352 to +380
impl EventHandler<Self, BlockTemplateResponse> for MinerService {
fn handle_event(
&mut self,
block_template: BlockTemplateResponse,
ctx: &mut ServiceContext<Self>,
) {
let config = self.config.clone();
let addr = ctx
.service_ref::<Self>()
.expect("failed to get miner service ref")
.clone();
let parent = block_template.parent;
let block_template = block_template.template;
let block_time_gap = block_template.timestamp - parent.timestamp();

ctx.spawn(async move {
let should_skip =
config.miner.is_disable_mint_empty_block() && block_time_gap < 3600 * 1000;
if should_skip {
info!("Skipping minting empty block");
} else if let Err(e) = addr
.send(DispatchMintBlockTemplate { block_template })
.await
{
warn!("Failed to dispatch block template: {}", e);
}
});
}
}

⚠️ Potential issue

Fix skipping logic and time-gap subtraction in BlockTemplateResponse handler.

  • The empty-block check was dropped. As written, the miner will skip minting even when the template contains transactions if disable_mint_empty_block is set and the time gap is small. That’s a functional regression.
  • block_template.timestamp - parent.timestamp() can underflow; use saturating_sub.

Apply this diff to restore intended behavior and avoid underflow:

     fn handle_event(
         &mut self,
         block_template: BlockTemplateResponse,
         ctx: &mut ServiceContext<Self>,
     ) {
         let config = self.config.clone();
         let addr = ctx
             .service_ref::<Self>()
             .expect("failed to get miner service ref")
             .clone();
         let parent = block_template.parent;
         let block_template = block_template.template;
-        let block_time_gap = block_template.timestamp - parent.timestamp();
+        let block_time_gap = block_template.timestamp.saturating_sub(parent.timestamp());
 
         ctx.spawn(async move {
-            let should_skip =
-                config.miner.is_disable_mint_empty_block() && block_time_gap < 3600 * 1000;
+            let should_skip = config.miner.is_disable_mint_empty_block()
+                && block_template.body.transactions.is_empty()
+                && block_time_gap < 3600 * 1000;
             if should_skip {
                 info!("Skipping minting empty block");
             } else if let Err(e) = addr
                 .send(DispatchMintBlockTemplate { block_template })
                 .await
             {
                 warn!("Failed to dispatch block template: {}", e);
             }
         });
     }
🤖 Prompt for AI Agents
In miner/src/lib.rs around lines 352 to 380, the handler dropped the empty-block
check and used a subtraction that can underflow; restore the original logic so
we only skip minting when the template contains no transactions AND
config.miner.is_disable_mint_empty_block() is true, and compute the time gap
using saturating_sub (e.g.,
block_template.timestamp.saturating_sub(parent.timestamp())) before comparing to
the threshold; keep the spawn/dispatch flow and ensure the condition checks both
emptiness and the time gap to avoid the regression and underflow.
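The underflow and the restored skip condition can be checked in isolation; the millisecond timestamps below are arbitrary.

```rust
fn main() {
    let parent_ts: u64 = 10_000;
    // Template older than parent: plain `-` would panic in debug builds
    // (and wrap in release builds).
    let template_ts: u64 = 9_500;

    // Saturating subtraction clamps at zero instead of underflowing.
    let gap = template_ts.saturating_sub(parent_ts);
    assert_eq!(gap, 0);

    // The normal case is unchanged.
    assert_eq!(12_000u64.saturating_sub(parent_ts), 2_000);

    // The skip check then composes safely with the one-hour threshold,
    // and only fires for an actually-empty template.
    let disable_mint_empty_block = true;
    let template_is_empty = true;
    let should_skip = disable_mint_empty_block && template_is_empty && gap < 3600 * 1000;
    assert!(should_skip);
    println!("gap = {gap}");
}
```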

Comment on lines 4 to 93
// use starcoin_account_service::AccountService;
// use starcoin_config::NodeConfig;
// use starcoin_consensus::Consensus;
// use starcoin_dag::service::pruning_point_service::PruningPointService;
// use starcoin_genesis::Genesis;
// use starcoin_miner::{
// BlockBuilderService, BlockHeaderExtra, BlockTemplateRequest, MinerService, NewHeaderChannel,
// NewHeaderService, SubmitSealRequest,
// };
// use starcoin_service_registry::{RegistryAsyncService, RegistryService};
// use starcoin_storage::BlockStore;
// use starcoin_sync::block_connector::BlockConnectorService;
// use starcoin_txpool::TxPoolService;
// use starcoin_types::{system_events::GenerateBlockEvent, U256};
// use std::sync::Arc;
// use std::time::Duration;
// use tokio::time::sleep;

#[stest::test]
async fn test_miner_service() {
let mut config = NodeConfig::random_for_dag_test();
config.miner.disable_mint_empty_block = Some(false);
let registry = RegistryService::launch();
let node_config = Arc::new(config.clone());
registry.put_shared(node_config.clone()).await.unwrap();
let (storage, _chain_info, genesis, dag) =
Genesis::init_storage_for_test(config.net()).unwrap();
registry.put_shared(storage.clone()).await.unwrap();
registry.put_shared(dag).await.unwrap();
// #[stest::test]
// async fn test_miner_service() {
// let mut config = NodeConfig::random_for_dag_test();
// config.miner.disable_mint_empty_block = Some(false);
// let registry = RegistryService::launch();
// let node_config = Arc::new(config.clone());
// registry.put_shared(node_config.clone()).await.unwrap();
// let (storage, _chain_info, genesis, dag) =
// Genesis::init_storage_for_test(config.net()).unwrap();
// registry.put_shared(storage.clone()).await.unwrap();
// registry.put_shared(dag).await.unwrap();

let genesis_hash = genesis.block().id();
registry.put_shared(genesis).await.unwrap();
let chain_header = storage
.get_block_header_by_hash(genesis_hash)
.unwrap()
.unwrap();
// let genesis_hash = genesis.block().id();
// registry.put_shared(genesis).await.unwrap();
// let chain_header = storage
// .get_block_header_by_hash(genesis_hash)
// .unwrap()
// .unwrap();

let txpool = TxPoolService::new(node_config.clone(), storage.clone(), chain_header, None);
registry.put_shared(txpool).await.unwrap();
registry
.register_mocker(AccountService::mock().unwrap())
.await
.unwrap();
// let txpool = TxPoolService::new(node_config.clone(), storage.clone(), chain_header, None);
// registry.put_shared(txpool).await.unwrap();
// registry
// .register_mocker(AccountService::mock().unwrap())
// .await
// .unwrap();

registry.register::<PruningPointService>().await.unwrap();
// registry.register::<PruningPointService>().await.unwrap();

registry
.register::<BlockConnectorService<TxPoolService>>()
.await
.unwrap();
// registry
// .register::<BlockConnectorService<TxPoolService>>()
// .await
// .unwrap();

registry.put_shared(NewHeaderChannel::new()).await.unwrap();
registry.register::<NewHeaderService>().await.unwrap();
let template = registry.register::<BlockBuilderService>().await.unwrap();
let response = template
.send(BlockTemplateRequest)
.await
.unwrap()
.unwrap()
.template;
assert_eq!(response.number, 1);
// registry.put_shared(NewHeaderChannel::new()).await.unwrap();
// registry.register::<NewHeaderService>().await.unwrap();
// let template = registry.register::<BlockBuilderService>().await.unwrap();
// let response = template
// .send(BlockTemplateRequest)
// .await
// .unwrap()
// .unwrap()
// .template;
// assert_eq!(response.number, 1);

let miner = registry.register::<MinerService>().await;
assert!(miner.is_ok());
// let miner = registry.register::<MinerService>().await;
// assert!(miner.is_ok());

let miner = miner.unwrap();
miner.notify(GenerateBlockEvent::new_break(false)).unwrap();
// let miner = miner.unwrap();
// miner.notify(GenerateBlockEvent::new_break(false)).unwrap();

sleep(Duration::from_millis(200)).await;
miner.notify(GenerateBlockEvent::new_break(true)).unwrap();
sleep(Duration::from_millis(200)).await;
// Generate a event
let diff = U256::from(1024);
let minting_blob = vec![0u8; 76];
// sleep(Duration::from_millis(200)).await;
// miner.notify(GenerateBlockEvent::new_break(true)).unwrap();
// sleep(Duration::from_millis(200)).await;
// // Generate a event
// let diff = U256::from(1024);
// let minting_blob = vec![0u8; 76];

let nonce = config
.net()
.genesis_config()
.consensus()
.solve_consensus_nonce(&minting_blob, diff, config.net().time_service().as_ref());
miner
.try_send(SubmitSealRequest::new(
minting_blob,
nonce,
BlockHeaderExtra::new([0u8; 4]),
))
.unwrap();
// let nonce = config
// .net()
// .genesis_config()
// .consensus()
// .solve_consensus_nonce(&minting_blob, diff, config.net().time_service().as_ref());
// miner
// .try_send(SubmitSealRequest::new(
// minting_blob,
// nonce,
// BlockHeaderExtra::new([0u8; 4]),
// ))
// .unwrap();

registry.shutdown_system().await.unwrap();
}
// registry.shutdown_system().await.unwrap();
// }

🛠️ Refactor suggestion

All miner service tests are commented out — they need to be restored to prevent coverage/regression gaps

The entire test body is disabled via comments. This removes coverage for the miner pipeline and block template creation, risking regressions going unnoticed (especially given the recent async/event-driven refactor).

Action options:

  • Port the test to the new channel-driven template flow and MinerService notifications.
  • If temporarily unstable, mark with #[ignore] and track with an issue rather than commenting out.
  • If obsolete, delete the file to avoid dead code drift.

Proposed minimal restoration targeting the new flow (adjust types/names as per the refactor):

-// #[stest::test]
-// async fn test_miner_service() {
-//     ...
-//     let template = registry.register::<BlockBuilderService>().await.unwrap();
-//     let response = template
-//         .send(BlockTemplateRequest)
-//         .await
-//         .unwrap()
-//         .unwrap()
-//         .template;
-//     assert_eq!(response.number, 1);
-//     ...
-// }
+#[stest::test]
+async fn test_miner_service_template_smoke() {
+    // Arrange
+    let mut config = NodeConfig::random_for_dag_test();
+    config.miner.disable_mint_empty_block = Some(false);
+    let registry = RegistryService::launch();
+    let node_config = Arc::new(config.clone());
+    registry.put_shared(node_config.clone()).await.unwrap();
+    let (storage, _chain_info, genesis, dag) = Genesis::init_storage_for_test(config.net()).unwrap();
+    registry.put_shared(storage.clone()).await.unwrap();
+    registry.put_shared(dag).await.unwrap();
+    registry.put_shared(genesis).await.unwrap();
+
+    // Register new header/template pipeline
+    registry.put_shared(NewHeaderChannel::new()).await.unwrap();
+    registry.register::<NewHeaderService>().await.unwrap();
+    let builder = registry.register::<BlockBuilderService>().await.unwrap();
+
+    // Act
+    let resp = builder.send(BlockTemplateRequest).await.unwrap().unwrap();
+
+    // Assert
+    assert_eq!(resp.template.number, 1);
+    registry.shutdown_system().await.unwrap();
+}

If you’d like, I can open a tracking issue and help adapt this to the exact new API surface.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In miner/tests/miner_test.rs around lines 4-93 the entire async miner
integration test is commented out which removes coverage; restore the test by
re-enabling it and adapting to the current API surface (uncomment and update
imports, types and service names to match the refactor), or if it is currently
unstable, re-enable as a test with #[stest::test] and #[ignore] and create a
short TODO/issue reference in the file; specifically: uncomment the test body,
update NodeConfig/Genesis/Registry/Service type names and channel calls to the
new channel-driven block template and MinerService APIs, ensure async runtime
attributes remain, and either run and fix compile errors or mark #[ignore] and
add an issue number before committing.

Comment on lines +450 to +461
pub fn account_unlock_in_batch(
&self,
batch: Vec<(AccountAddress, String)>,
duration: std::time::Duration,
) -> anyhow::Result<Vec<AccountInfo>> {
self.call_rpc_blocking(|inner| {
inner
.account_client
.unlock_in_batch(batch, Some(duration.as_secs() as u32))
})
.map_err(map_err)
}

🛠️ Refactor suggestion

Guard against u64→u32 truncation for unlock duration.

Casting with as u32 may silently truncate for large durations. Clamp to u32::MAX or error out.

Apply this diff to clamp safely:

-        self.call_rpc_blocking(|inner| {
-            inner
-                .account_client
-                .unlock_in_batch(batch, Some(duration.as_secs() as u32))
-        })
+        self.call_rpc_blocking(|inner| {
+            let secs = duration.as_secs().min(u32::MAX as u64) as u32;
+            inner
+                .account_client
+                .unlock_in_batch(batch, Some(secs))
+        })
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub fn account_unlock_in_batch(
&self,
batch: Vec<(AccountAddress, String)>,
duration: std::time::Duration,
) -> anyhow::Result<Vec<AccountInfo>> {
self.call_rpc_blocking(|inner| {
inner
.account_client
.unlock_in_batch(batch, Some(duration.as_secs() as u32))
})
.map_err(map_err)
}
pub fn account_unlock_in_batch(
&self,
batch: Vec<(AccountAddress, String)>,
duration: std::time::Duration,
) -> anyhow::Result<Vec<AccountInfo>> {
self.call_rpc_blocking(|inner| {
let secs = duration.as_secs().min(u32::MAX as u64) as u32;
inner
.account_client
.unlock_in_batch(batch, Some(secs))
})
.map_err(map_err)
}
🤖 Prompt for AI Agents
In rpc/client/src/lib.rs around lines 450 to 461, the direct cast
duration.as_secs() as u32 can silently truncate large durations; instead compute
let secs = duration.as_secs(); clamp it to u32::MAX if secs > u32::MAX (e.g. let
secs_u32 = if secs > u64::from(u32::MAX) { u32::MAX } else { secs as u32 });
then pass Some(secs_u32) to unlock_in_batch so no silent truncation occurs
(alternatively return an error when secs exceeds u32::MAX if you prefer failing
fast).
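Both remediation options named above (clamp, or fail fast) flow through the same checked conversion. A minimal sketch using std's `u32::try_from`; the `clamped_secs` helper name is illustrative and not part of the client code:

```rust
use std::time::Duration;

/// Saturating u64 -> u32 conversion for whole seconds: large durations
/// clamp to u32::MAX instead of silently wrapping as `as u32` would.
fn clamped_secs(duration: Duration) -> u32 {
    u32::try_from(duration.as_secs()).unwrap_or(u32::MAX)
}

fn main() {
    // Typical unlock durations pass through unchanged.
    assert_eq!(clamped_secs(Duration::from_secs(300)), 300);
    // One past u32::MAX saturates; `as u32` would wrap this to 0.
    assert_eq!(clamped_secs(Duration::from_secs(u32::MAX as u64 + 1)), u32::MAX);
    // To fail fast instead, propagate the TryFrom error to the caller.
    assert!(u32::try_from(u64::MAX).is_err());
}
```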

Comment on lines +46 to +60
fn submit_transactions(
&self,
txns: Vec<SignedUserTransaction>,
) -> FutureResult<Vec<HashValue>> {
let txn_hashes = txns.iter().map(|txn| txn.id()).collect();
let result: Result<(), jsonrpc_core::Error> = self
.service
.add_txns(txns)
.pop()
.expect("txpool should return result")
.map_err(convert_to_rpc_error);

Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
}


⚠️ Potential issue

Batch submit ignores per-transaction errors; only the last result is checked.

Using pop() validates just the last txn. Earlier failures will be silently acknowledged, and all hashes are returned as “accepted.”

Apply this diff to surface the first error and only succeed when all txns are accepted:

     fn submit_transactions(
         &self,
         txns: Vec<SignedUserTransaction>,
     ) -> FutureResult<Vec<HashValue>> {
-        let txn_hashes = txns.iter().map(|txn| txn.id()).collect();
-        let result: Result<(), jsonrpc_core::Error> = self
-            .service
-            .add_txns(txns)
-            .pop()
-            .expect("txpool should return result")
-            .map_err(convert_to_rpc_error);
-
-        Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
+        if txns.is_empty() {
+            return Box::pin(futures::future::ok(Vec::new()));
+        }
+        let txn_hashes: Vec<HashValue> = txns.iter().map(|txn| txn.id()).collect();
+        let results = self.service.add_txns(txns);
+        // If any error exists, return the first mapped error.
+        let first_err = results.into_iter().find_map(|r| r.err());
+        let result: Result<(), jsonrpc_core::Error> = match first_err {
+            Some(err) => Err(convert_to_rpc_error(err)),
+            None => Ok(()),
+        };
+        Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
     }

Follow-up: add a unit test that mixes valid/invalid txns and asserts the RPC returns an error rather than partial success. I can draft the test scaffolding if helpful.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn submit_transactions(
&self,
txns: Vec<SignedUserTransaction>,
) -> FutureResult<Vec<HashValue>> {
let txn_hashes = txns.iter().map(|txn| txn.id()).collect();
let result: Result<(), jsonrpc_core::Error> = self
.service
.add_txns(txns)
.pop()
.expect("txpool should return result")
.map_err(convert_to_rpc_error);
Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
}
fn submit_transactions(
&self,
txns: Vec<SignedUserTransaction>,
) -> FutureResult<Vec<HashValue>> {
if txns.is_empty() {
return Box::pin(futures::future::ok(Vec::new()));
}
let txn_hashes: Vec<HashValue> = txns.iter().map(|txn| txn.id()).collect();
let results = self.service.add_txns(txns);
// If any error exists, return the first mapped error.
let first_err = results.into_iter().find_map(|r| r.err());
let result: Result<(), jsonrpc_core::Error> = match first_err {
Some(err) => Err(convert_to_rpc_error(err)),
None => Ok(()),
};
Box::pin(futures::future::ready(result.map(|_| txn_hashes)))
}
🤖 Prompt for AI Agents
In rpc/server/src/module/txpool_rpc.rs around lines 46 to 60, the current batch
submit uses .pop() and only inspects the last transaction result so earlier
per-txn errors are ignored; change the logic to collect all results from
self.service.add_txns(txns), check them in order and if any entry is an Err
convert and return that first error as a jsonrpc_core::Error, otherwise return
the full list of txn_hashes; ensure you do not consume only the last element
(iterate or find on the collection), propagate the first encountered error via
convert_to_rpc_error, and update the function to only succeed when all txns are
accepted; also add a unit test that submits a mix of valid and invalid
transactions and asserts the RPC returns an error (no partial success).
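The all-or-nothing semantics described above can be isolated from the txpool types. In this sketch `String` stands in for the real per-transaction error type, and `first_error` mirrors the `find_map` logic of the suggested fix:

```rust
// Succeed only when every per-transaction result is Ok; otherwise
// surface the first error in submission order.
fn first_error(results: Vec<Result<(), String>>) -> Result<(), String> {
    results.into_iter().find_map(|r| r.err()).map_or(Ok(()), Err)
}

fn main() {
    // All accepted: the batch succeeds.
    assert_eq!(first_error(vec![Ok(()), Ok(())]), Ok(()));
    // Mixed results: the first failure is reported, not the last.
    assert_eq!(
        first_error(vec![
            Ok(()),
            Err("bad nonce".to_string()),
            Err("expired".to_string()),
        ]),
        Err("bad nonce".to_string())
    );
}
```

With `pop()`, the second case above would wrongly report `"expired"` (or, if the last entry were `Ok`, no error at all), which is exactly the partial-success bug flagged in the comment.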

Comment on lines +67 to 71
fn next_sequence_number_with_state(
&self,
address: AccountAddress,
header: &BlockHeader,
state_root: HashValue,
) -> Option<u64>;

💡 Verification agent

🧩 Analysis chain

Ensure cross-file naming/metrics align with “with_state.”

Elsewhere, timers/labels still reference “…with_header” per txpool/src/tx_pool_service_impl.rs. Align labels for consistency and searchability.

Run this to locate lingering “with_header” labels/usages:


🏁 Script executed:

#!/bin/bash
rg -nP -C2 'next_sequence_number_with_header|get_pending_with_header|with_header' --hidden

Length of output: 2629


Align “with_state” naming across API and service implementation

The API defines:

fn next_sequence_number_with_state(
    &self,
    address: AccountAddress,
    state_root: HashValue,
) -> Option<u64>;

but in txpool/src/tx_pool_service_impl.rs the service and its metrics still reference with_header. To maintain consistency and improve discoverability, all “with_header” identifiers should be renamed to “with_state”:

• In txpool/src/tx_pool_service_impl.rs around line 203:

  • Change the metrics label from .with_label_values(&["next_sequence_number_with_header"]) to .with_label_values(&["next_sequence_number_with_state"]).
  • Update the call site .next_sequence_number_with_header(address, state_root) to .next_sequence_number_with_state(address, state_root).

• Immediately below (around line 393), rename the function signature from

  pub(crate) fn next_sequence_number_with_header(
      &self,
      address: AccountAddress,
      state_root: HashValue,
  ) -> Option<u64>

  to

  pub(crate) fn next_sequence_number_with_state(
      &self,
      address: AccountAddress,
      state_root: HashValue,
  ) -> Option<u64>

This ensures that method names and metric labels remain aligned with the API and simplifies future search and maintenance.

🤖 Prompt for AI Agents
In txpool/src/tx_pool_service_impl.rs around line 203 and again around line 393,
the service and metrics still use "with_header" while the API uses "with_state";
update the metric label call at ~line 203 from
.with_label_values(&["next_sequence_number_with_header"]) to
.with_label_values(&["next_sequence_number_with_state"]), change the call site
.next_sequence_number_with_header(address, state_root) to
.next_sequence_number_with_state(address, state_root), and rename the function
at ~line 393 from pub(crate) fn next_sequence_number_with_header(...) to
pub(crate) fn next_sequence_number_with_state(...) keeping the parameters and
body unchanged so names align with the API.

@github-actions

Benchmark for 951a57e

| Test | Base | PR | % |
|---|---|---|---|
| accumulator_append | 884.0±193.74µs | 1177.4±226.17µs | +33.19% |
| block_apply/block_apply_10 | 480.1±63.65ms | 472.5±76.50ms | -1.58% |
| block_apply/block_apply_1000 | 50.2±2.78s | 46.4±1.12s | -7.57% |
| get_with_proof/db_store | 49.3±8.48µs | 61.3±10.61µs | +24.34% |
| get_with_proof/mem_store | 38.2±2.95µs | 48.5±11.42µs | +26.96% |
| put_and_commit/db_store/1 | 162.7±40.52µs | 207.2±17.51µs | +27.35% |
| put_and_commit/db_store/10 | 1386.8±267.69µs | 1198.8±183.57µs | -13.56% |
| put_and_commit/db_store/100 | 10.7±1.67ms | 10.9±1.86ms | +1.87% |
| put_and_commit/db_store/5 | 869.9±171.90µs | 787.9±132.35µs | -9.43% |
| put_and_commit/db_store/50 | 5.9±0.65ms | 7.0±1.38ms | +18.64% |
| put_and_commit/mem_store/1 | 120.3±11.27µs | 120.5±10.62µs | +0.17% |
| put_and_commit/mem_store/10 | 821.6±167.78µs | 700.1±89.87µs | -14.79% |
| put_and_commit/mem_store/100 | 7.5±1.35ms | 7.9±1.54ms | +5.33% |
| put_and_commit/mem_store/5 | 367.6±62.85µs | 371.1±94.73µs | +0.95% |
| put_and_commit/mem_store/50 | 3.6±0.68ms | 3.4±0.34ms | -5.56% |
| query_block/query_block_in(10)_times(100) | 5.0±0.44ms | 5.7±1.06ms | +14.00% |
| query_block/query_block_in(10)_times(1000) | 48.2±2.56ms | 49.7±5.20ms | +3.11% |
| query_block/query_block_in(10)_times(10000) | 507.2±55.84ms | 490.1±41.68ms | -3.37% |
| query_block/query_block_in(1000)_times(100) | 1180.7±176.81µs | 1694.8±263.32µs | +43.54% |
| query_block/query_block_in(1000)_times(1000) | 14.2±3.25ms | 17.2±2.81ms | +21.13% |
| query_block/query_block_in(1000)_times(10000) | 164.2±29.67ms | 203.5±21.85ms | +23.93% |
| storage_transaction | 1087.7±444.82µs | 1117.3±416.73µs | +2.72% |
| vm/transaction_execution/1 | 449.9±38.35ms | 538.1±55.44ms | +19.60% |
| vm/transaction_execution/10 | 144.2±9.62ms | 156.8±13.69ms | +8.74% |
| vm/transaction_execution/20 | 132.8±8.84ms | 142.9±12.30ms | +7.61% |
| vm/transaction_execution/5 | 176.4±12.15ms | 192.8±13.73ms | +9.30% |
| vm/transaction_execution/50 | 153.4±20.26ms | 167.5±10.44ms | +9.19% |

@github-actions

Benchmark for 64b1537

| Test | Base | PR | % |
|---|---|---|---|
| accumulator_append | 822.4±124.88µs | 817.9±96.05µs | -0.55% |
| block_apply/block_apply_10 | 460.0±70.80ms | 378.2±21.81ms | -17.78% |
| block_apply/block_apply_1000 | 46.6±0.85s | 47.6±2.05s | +2.15% |
| get_with_proof/db_store | 46.5±4.93µs | 52.5±7.20µs | +12.90% |
| get_with_proof/mem_store | 38.0±5.23µs | 49.3±8.57µs | +29.74% |
| put_and_commit/db_store/1 | 145.5±35.23µs | 131.2±27.14µs | -9.83% |
| put_and_commit/db_store/10 | 1262.2±148.45µs | 1131.0±137.35µs | -10.39% |
| put_and_commit/db_store/100 | 14.2±3.61ms | 10.5±0.94ms | -26.06% |
| put_and_commit/db_store/5 | 634.5±88.59µs | 678.8±124.04µs | +6.98% |
| put_and_commit/db_store/50 | 5.4±0.43ms | 5.7±0.54ms | +5.56% |
| put_and_commit/mem_store/1 | 73.1±9.41µs | 77.7±16.25µs | +6.29% |
| put_and_commit/mem_store/10 | 736.9±103.97µs | 680.3±65.35µs | -7.68% |
| put_and_commit/mem_store/100 | 7.2±0.79ms | 7.6±1.32ms | +5.56% |
| put_and_commit/mem_store/5 | 355.3±40.23µs | 500.2±107.21µs | +40.78% |
| put_and_commit/mem_store/50 | 3.4±0.33ms | 3.6±0.56ms | +5.88% |
| query_block/query_block_in(10)_times(100) | 4.9±0.48ms | 5.3±0.83ms | +8.16% |
| query_block/query_block_in(10)_times(1000) | 56.6±9.19ms | 53.6±9.60ms | -5.30% |
| query_block/query_block_in(10)_times(10000) | 487.4±43.77ms | 475.7±44.00ms | -2.40% |
| query_block/query_block_in(1000)_times(100) | 1137.7±131.54µs | 1321.0±273.18µs | +16.11% |
| query_block/query_block_in(1000)_times(1000) | 11.9±1.59ms | 11.9±1.31ms | 0.00% |
| query_block/query_block_in(1000)_times(10000) | 132.4±27.76ms | 119.1±20.84ms | -10.05% |
| storage_transaction | 1070.3±368.48µs | 1276.8±541.65µs | +19.29% |
| vm/transaction_execution/1 | 466.7±69.83ms | 429.9±19.95ms | -7.89% |
| vm/transaction_execution/10 | 141.0±8.51ms | 139.2±4.59ms | -1.28% |
| vm/transaction_execution/20 | 133.4±11.16ms | 144.5±15.77ms | +8.32% |
| vm/transaction_execution/5 | 171.4±10.19ms | 174.5±14.57ms | +1.81% |
| vm/transaction_execution/50 | 155.5±20.24ms | 161.9±11.97ms | +4.12% |

@github-actions

github-actions bot commented Sep 2, 2025

Benchmark for f90a031

| Test | Base | PR | % |
|---|---|---|---|
| accumulator_append | 918.7±193.74µs | 837.6±131.15µs | -8.83% |
| block_apply/block_apply_10 | 406.9±29.69ms | 524.2±109.60ms | +28.83% |
| block_apply/block_apply_1000 | 49.4±2.93s | 45.4±1.87s | -8.10% |
| get_with_proof/db_store | 47.8±4.13µs | 47.3±4.50µs | -1.05% |
| get_with_proof/mem_store | 38.7±3.13µs | 40.2±5.81µs | +3.88% |
| put_and_commit/db_store/1 | 139.1±20.55µs | 153.1±26.12µs | +10.06% |
| put_and_commit/db_store/10 | 1245.8±232.93µs | 1199.8±221.94µs | -3.69% |
| put_and_commit/db_store/100 | 11.5±2.08ms | 10.9±1.35ms | -5.22% |
| put_and_commit/db_store/5 | 558.3±29.64µs | 710.7±92.72µs | +27.30% |
| put_and_commit/db_store/50 | 5.5±0.65ms | 5.4±0.61ms | -1.82% |
| put_and_commit/mem_store/1 | 79.1±19.93µs | 74.8±8.69µs | -5.44% |
| put_and_commit/mem_store/10 | 1104.1±160.63µs | 706.7±79.61µs | -35.99% |
| put_and_commit/mem_store/100 | 6.8±0.78ms | 6.7±0.44ms | -1.47% |
| put_and_commit/mem_store/5 | 359.8±48.75µs | 349.6±52.10µs | -2.83% |
| put_and_commit/mem_store/50 | 3.7±0.99ms | 3.4±0.35ms | -8.11% |
| query_block/query_block_in(10)_times(100) | 5.2±0.65ms | 4.7±0.36ms | -9.62% |
| query_block/query_block_in(10)_times(1000) | 52.5±6.68ms | 49.2±5.76ms | -6.29% |
| query_block/query_block_in(10)_times(10000) | 489.6±35.95ms | 477.6±18.26ms | -2.45% |
| query_block/query_block_in(1000)_times(100) | 1434.5±328.95µs | 2.1±0.28ms | +46.39% |
| query_block/query_block_in(1000)_times(1000) | 13.6±2.89ms | 12.9±2.55ms | -5.15% |
| query_block/query_block_in(1000)_times(10000) | 128.9±16.69ms | 122.6±23.82ms | -4.89% |
| storage_transaction | 1112.3±409.72µs | 1217.5±513.06µs | +9.46% |
| vm/transaction_execution/1 | 448.1±32.20ms | 449.2±35.89ms | +0.25% |
| vm/transaction_execution/10 | 151.4±12.29ms | 187.1±31.62ms | +23.58% |
| vm/transaction_execution/20 | 142.1±9.71ms | 130.7±3.58ms | -8.02% |
| vm/transaction_execution/5 | 189.4±13.49ms | 171.9±7.81ms | -9.24% |
| vm/transaction_execution/50 | 163.3±10.15ms | 158.7±12.55ms | -2.82% |
