This repository was archived by the owner on Jan 22, 2025. It is now read-only.

Conversation

@AshwinSekar (Contributor) commented Aug 23, 2023

Problem

After a restart, slots that were previously marked duplicate are not marked invalid in fork choice.

Summary of Changes

When processing the result of replay (dead or frozen), also check whether the blockstore contains a duplicate proof for the slot. If it does, notify the state machine.

Split from #32862
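The change described above can be sketched as follows. This is not the PR's actual code: `DuplicateState`, `process_replay_result`, and the stub `Blockstore` are illustrative stand-ins; only the idea (consult the blockstore for a stored duplicate proof when a replay result is processed, and mark the slot if one exists) comes from the summary.

```rust
use std::collections::HashSet;

// Stub standing in for solana's Blockstore; `duplicate_proofs` models the
// set of slots for which a duplicate proof has been persisted.
struct Blockstore {
    duplicate_proofs: HashSet<u64>,
}

impl Blockstore {
    fn has_duplicate_shreds_in_slot(&self, slot: u64) -> bool {
        self.duplicate_proofs.contains(&slot)
    }
}

// Stub for the duplicate-slot state machine that fork choice consults.
#[derive(Default)]
struct DuplicateState {
    marked_invalid: HashSet<u64>,
}

/// Called for every slot whose replay result (dead or frozen) is processed.
fn process_replay_result(slot: u64, blockstore: &Blockstore, state: &mut DuplicateState) {
    if blockstore.has_duplicate_shreds_in_slot(slot) {
        // A duplicate proof survived the restart: let the state machine
        // know so fork choice treats the slot as invalid.
        state.marked_invalid.insert(slot);
    }
}

fn main() {
    let blockstore = Blockstore {
        duplicate_proofs: [5u64].into_iter().collect(),
    };
    let mut state = DuplicateState::default();
    for slot in [4u64, 5, 6] {
        process_replay_result(slot, &blockstore, &mut state);
    }
    assert!(state.marked_invalid.contains(&5));
    assert!(!state.marked_invalid.contains(&4));
    println!("slot 5 invalid: {}", state.marked_invalid.contains(&5));
}
```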

@AshwinSekar AshwinSekar marked this pull request as ready for review August 23, 2023 22:33
@AshwinSekar AshwinSekar requested a review from carllin August 23, 2023 22:33
codecov bot commented Aug 23, 2023

Codecov Report

Merging #32962 (b83b1e2) into master (fb1ba21) will decrease coverage by 0.1%.
The diff coverage is 20.0%.

❗ Current head b83b1e2 differs from pull request most recent head 33a3711. Consider uploading reports for the commit 33a3711 to get more accurate results

@@            Coverage Diff            @@
##           master   #32962     +/-   ##
=========================================
- Coverage    82.0%    82.0%   -0.1%     
=========================================
  Files         785      784      -1     
  Lines      210819   212607   +1788     
=========================================
+ Hits       173055   174423   +1368     
- Misses      37764    38184    +420     

@AshwinSekar AshwinSekar force-pushed the duplicate-shred-proof-replay branch 2 times, most recently from d0d056b to c3fbcd0 Compare August 30, 2023 22:18
Comment on lines +1239 to +1242
let duplicate_slots = blockstore
.duplicate_slots_iterator(bank_forks.root_bank().slot())
.unwrap();
let duplicate_slot_hashes = duplicate_slots
.filter_map(|slot| bank_forks.bank_hash(slot).map(|hash| (slot, hash)));
Contributor
This can be moved outside of this block scope using just the returned root_bank, since it doesn't rely on holding the bank_forks lock

Contributor Author
You still need the bank forks lock to get the hashes for the duplicate slots:
.filter_map(|slot| bank_forks.bank_hash(slot).map(|hash| (slot, hash)));

Contributor
ah yeah, I'm dumb, ignore this
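The constraint the author points out can be sketched as follows, with stub types standing in for Solana's `Blockstore` and `BankForks`: the duplicate slots come from a blockstore iterator, but mapping each slot to its frozen bank hash requires read access to `bank_forks`, so the `filter_map` must run while the read lock is held.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type Slot = u64;
type Hash = [u8; 32];

// Stub for BankForks: frozen bank hashes keyed by slot.
#[derive(Default)]
struct BankForks {
    hashes: HashMap<Slot, Hash>,
}

impl BankForks {
    fn bank_hash(&self, slot: Slot) -> Option<Hash> {
        self.hashes.get(&slot).copied()
    }
}

fn duplicate_slot_hashes(
    duplicate_slots: impl Iterator<Item = Slot>,
    bank_forks: &Arc<RwLock<BankForks>>,
) -> Vec<(Slot, Hash)> {
    // The read lock is held for the whole filter_map: each slot lookup
    // needs bank_forks, which is why the mapping cannot be hoisted out
    // of the locked scope.
    let bank_forks = bank_forks.read().unwrap();
    duplicate_slots
        .filter_map(|slot| bank_forks.bank_hash(slot).map(|hash| (slot, hash)))
        .collect()
}

fn main() {
    let mut forks = BankForks::default();
    forks.hashes.insert(3, [1u8; 32]);
    let bank_forks = Arc::new(RwLock::new(forks));
    // Slot 3 is frozen, slot 9 is not; only slot 3 yields a (slot, hash) pair.
    let out = duplicate_slot_hashes([3u64, 9].into_iter(), &bank_forks);
    assert_eq!(out, vec![(3, [1u8; 32])]);
    println!("duplicate slot hashes: {}", out.len());
}
```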

Comment on lines 5623 to 5634

if blockstore.is_full(child) {
let shreds = blockstore
.get_data_shreds_for_slot(child, 0)
.expect("Child is full");
let mut our_block = true;
for shred in shreds {
our_block &= shred.verify(&target_pubkey);
}
if our_block {
done = true;
}
checked_children.insert(child);
Contributor

With the new proposed change to deal with the votes not landing, we still might build a block off the duplicate block instead of the parent, right?

Would it be better to check that our next vote is not on the duplicate fork (we would have to guarantee that the duplicate fork is the heaviest though, i.e. that votes have landed on it)?

Contributor Author

@AshwinSekar commented Sep 2, 2023
Even with the new proposed change, if you have an invalid fork with no alternative valid fork you will always create a new fork, which is the case here since there is only one fork. So although after we create the new fork we might go back to building off the duplicate fork, since we check all children this will still work.

Would it be better to check that our next vote is not on the duplicate fork (we would have to guarantee that the duplicate fork is the heaviest though, i.e. that votes have landed on it)?

This could be possible, although it would require a really complicated setup: as is, the validator would never be able to switch onto the non-duplicate fork, so we would need a switching threshold of stake that is both not on the duplicate fork and has not already created its own fork.
I don't think this setup would test the intended code path in a more rigorous way; it is probably better suited for a duplicate switch rollback test.

@AshwinSekar AshwinSekar force-pushed the duplicate-shred-proof-replay branch from c3fbcd0 to a06d82a Compare September 2, 2023 04:49
@AshwinSekar AshwinSekar requested a review from carllin September 2, 2023 04:50
@AshwinSekar AshwinSekar force-pushed the duplicate-shred-proof-replay branch from a06d82a to 33a3711 Compare September 2, 2023 05:22