This repository was archived by the owner on Nov 15, 2023. It is now read-only.
Merged
Changes from 1 commit
Commits
40 commits
0e48dc9
Fix overlay prefix removal result
gavofyork May 18, 2022
ae9b285
Second part of the overlay prefix removal fix.
gavofyork May 18, 2022
7f80f12
Report only items deleted from storage in clear_prefix
gavofyork May 20, 2022
3e580e5
Fix kill_prefix
gavofyork May 20, 2022
4a5ff62
Formatting
gavofyork May 20, 2022
8d56c13
Remove unused code
gavofyork May 20, 2022
91362cd
Fixes
gavofyork May 20, 2022
4767538
Fixes
gavofyork May 20, 2022
08b898c
Introduce clear_prefix host function v3
gavofyork May 20, 2022
aee7993
Formatting
gavofyork May 20, 2022
a1982d1
Use v2 for now
gavofyork May 20, 2022
ed26985
Fixes
gavofyork May 20, 2022
72f8fdd
Formatting
gavofyork May 20, 2022
d97aa89
Docs
gavofyork May 20, 2022
f4bbf8d
Child prefix removal should also hide v3 for now
gavofyork May 20, 2022
e82b1ca
Fixes
gavofyork May 20, 2022
73294b0
Fixes
gavofyork May 20, 2022
8a8efb1
Formatting
gavofyork May 20, 2022
a626d54
Fixes
gavofyork May 21, 2022
6867534
apply_to_keys_whle takes start_at
gavofyork May 21, 2022
94e885e
apply_to_keys_whle takes start_at
gavofyork May 21, 2022
2566a4d
apply_to_keys_whle takes start_at
gavofyork May 23, 2022
a71f9a0
Cursor API; force limits
gavofyork May 23, 2022
f435107
Use unsafe deprecated functions
gavofyork May 23, 2022
bc181a0
Formatting
gavofyork May 23, 2022
9317f6a
Merge remote-tracking branch 'origin/master' into gav-clear-prefix-v2
gavofyork May 23, 2022
0e51dbe
Fixes
gavofyork May 23, 2022
ee53604
Grumbles
gavofyork May 25, 2022
1fb2ef5
Fixes
gavofyork May 25, 2022
0338fce
Docs
gavofyork May 25, 2022
73d1fc6
Some nitpicks :see_no_evil:
bkchr May 25, 2022
ef7f475
Update primitives/externalities/src/lib.rs
gavofyork May 25, 2022
7db73b1
Formatting
gavofyork May 25, 2022
6aeb3a3
Fixes
KiChjang May 25, 2022
4692d06
cargo fmt
KiChjang May 25, 2022
7e8a8c2
Fixes
KiChjang May 25, 2022
c22b924
Fixes
gavofyork May 25, 2022
edf7ce0
Update primitives/io/src/lib.rs
gavofyork May 25, 2022
33224f2
Formatting
gavofyork May 25, 2022
a5f349b
Fixes
gavofyork May 26, 2022
Cursor API; force limits
gavofyork committed May 23, 2022
commit a71f9a08c46a78507f598ee4f5aa3b6cb35ad4e3
1 change: 1 addition & 0 deletions Cargo.lock


4 changes: 2 additions & 2 deletions frame/support/src/lib.rs
@@ -1097,12 +1097,12 @@ pub mod tests {
DoubleMap::insert(&(key1 + 1), &key2, &4u64);
DoubleMap::insert(&(key1 + 1), &(key2 + 1), &4u64);
assert!(matches!(
-DoubleMap::clear_prefix(&key1, None),
+DoubleMap::clear_prefix(&key1, u32::max_value(), None),
// Note this is the incorrect answer (for now), since we are using v2 of
// `clear_prefix`.
// When we switch to v3, then this will become:
// sp_io::ClearPrefixResult::NoneLeft { db: 0, total: 2 },
-sp_io::ClearPrefixResult::NoneLeft { db: 0, total: 0 },
+sp_io::ClearPrefixResult { maybe_cursor: None, db: 0, total: 0, loops: 0 },
));
assert_eq!(DoubleMap::get(&key1, &key2), 0u64);
assert_eq!(DoubleMap::get(&key1, &(key2 + 1)), 0u64);
55 changes: 36 additions & 19 deletions frame/support/src/storage/child.rs
@@ -144,34 +144,51 @@ pub fn kill_storage(child_info: &ChildInfo, limit: Option<u32>) -> KillStorageRe
}
}

/// Remove all `storage_key` key/values
/// Partially clear the child storage of each key-value pair.
///
/// Deletes all keys from the overlay and up to `limit` keys from the backend if
/// it is set to `Some`. No limit is applied when `limit` is set to `None`.
/// # Limit
///
/// The limit can be used to partially delete a child trie in case it is too large
/// to delete in one go (block).
/// A *limit* should always be provided through `maybe_limit`. This is one fewer than the
/// maximum number of backend iterations which may be done by this operation and as such
/// represents the maximum number of backend deletions which may happen. A *limit* of zero
/// implies that no keys will be deleted, though there may be a single iteration done.

Contributor: Are there any use cases where it makes sense to pass None to maybe_limit? If not, it sounds like we should just not make it an Option.

Member Author: Might be worth keeping the host functions general for now. Can't predict future uses quite yet.
///
/// # Note
/// The limit can be used to partially delete storage items in case it is too large or costly
/// to delete all in a single operation.
///
/// Please note that keys that are residing in the overlay for that child trie when
/// issuing this call are all deleted without counting towards the `limit`. Only keys
/// written during the current block are part of the overlay. Deleting with a `limit`
/// mostly makes sense with an empty overlay for that child trie.
/// # Cursor
///
/// Calling this function multiple times per block for the same `storage_key` does
/// not make much sense because it is not cumulative when called inside the same block.
/// Use this function to distribute the deletion of a single child trie across multiple
/// blocks.
pub fn clear_storage(child_info: &ChildInfo, limit: Option<u32>) -> ClearPrefixResult {
/// A *cursor* may be passed in to this operation with `maybe_cursor`. `None` should only be
/// passed once (in the initial call) for any attempt to clear storage. Subsequent calls
/// operating on the same prefix should always pass `Some`, and this should be equal to the
/// previous call result's `maybe_cursor` field.
///
/// Returns [`ClearPrefixResult`] to inform about the result. Once the resultant `maybe_cursor`
/// field is `None`, then no further items remain to be deleted.
///
/// NOTE: After the initial call for any given child storage, it is important that no further
/// keys are inserted. If so, then they may or may not be deleted by subsequent calls.
///
/// # Note
///
/// Please note that keys which are residing in the overlay for the child are deleted without
/// counting towards the `limit`.
pub fn clear_storage(
child_info: &ChildInfo,
maybe_limit: Option<u32>,
_maybe_cursor: Option<&[u8]>,
Contributor: WTF? Now our code using the new `::clear` API is in an infinite loop, thanks a lot.

Contributor: If you're writing this comment just to blow off some steam, this is not the right place.

What exactly is the code in your repo that is undergoing an infinite loop after this PR?

Contributor: The code like this:

let mut res = <MyMap<T>>::clear(16, None);
while let Some(cursor) = res.maybe_cursor {
    res = <MyMap<T>>::clear(16, Some(&cursor));
}

Where MyMap is:

#[pallet::storage]
pub type MyMap<T: Config> =
        StorageMap<_, Twox64Concat, T::AccountId, MyValue, OptionQuery>;

It is quite a serious issue with how one would use a cursor. Here, the issue is obvious, as the cursor is simply ignored, leading to a never-ending loop, since the code goes over the same data over and over again.

The workaround is to use the deprecated remove_all call. However, you'd think this new API would work.

Member: Yeah, that is right! Sorry, that was an oversight by us. This currently happens because we don't have the new host function enabled. With it we wouldn't get this infinite loop.

Contributor: That's my understanding as well, and it is also understandable.

But it probably shouldn't have been integrated at this stage. It also doesn't help that the only working option is remove_all at this time.

We're guilty of this too, but somehow this issue has passed through all the QA layers here and at our end, and we ended up with a bug.

I propose that, as a hotfix measure, and so that people don't start jumping to the new (and so far broken) API, we undeprecate remove_all and hide the new clear implementations behind a feature flag (unstable?).

Member Author (@gavofyork, Jul 16, 2022): clear does not give the semantics needed for the code example above to work. This is why the documentation reads:

Attempt to remove all items from the map.

Emphasis added on purpose. clear attempts to make progress with each call, and once per block it certainly will with any sensible implementation. But it will not necessarily make progress on every call, and indeed, with current host functions only the first call per transaction may make any progress.

The API avoided giving guarantees so that downstream projects can write code which works regardless of the status of host functions.

A simpler (and non-broken) version of the code above is just to use <MyMap<T>>::clear(u32::max_value(), None);. This is not especially sensible since it essentially leaves the amount of work unlimited and might cause your block to be overweight and unvalidatable.

In general, if you see an API designed to force a limit on the number of operations, then it might not be such a great idea to "work around" the API designer's intention by placing it within an unbounded loop.

Contributor: First of all, thanks for the workaround of <MyMap<T>>::clear(u32::max_value(), None); there's nothing in the code or docs of this fn that would hint at trying this (simply because in Rust you don't have to resort to this; people tend to use Option to express it). It would be nice if there were still a call that does not take the limit argument, or for this call to take the limit as an Option. At the very least, add this "pass u32::max_value() as the limit" bit to the docs.

Secondly, this is definitely going even more sideways. When I use the code that documents how the cursor behaves, I expect it to do what is documented.

"In general if you see an API designed to force a limit on the number of operations, then it might not be such a great idea to work around the API designer's intention by placing it within a loop."

The thing is, I am not trying to work around anything. I am using the API as documented. It is not my fault that it is broken. I've tried to wrap my mind around this phrase to agree with it in the general case, but I don't think it makes sense there at all. And in this context, cursor APIs are precisely designed to be used in a loop of some shape or form. How else would you use them?

What I'd like to see improved here is the documentation on correctness. The documentation does not mention that this function can only work once per block (but it should have). In fact, I don't think it was intended to have this limitation: with a proper overlay API it can just work (but that's where you need the new host functions, right?). And that's what I expected from this API. Why would it be worth working on otherwise? We already have remove_prefix, which takes a limit, works only over multiple blocks, and doesn't have a cursor.

So, one question about the intent: will the cursor work if we pass it from one block to another? Obviously it doesn't matter in the current implementation, but what's the intent here? It would be nice if this were also explained more thoroughly in the documentation.

This is pretty serious, and it seems like this issue has the potential to be shoved under the rug without a proper resolution, without any real reason for that. Currently the documentation doesn't mention anything about the requirement to run the code over multiple blocks (which would work properly, since the removals would span many instances of the storage overlays). The cursor is unused in this implementation, so the documentation on it will still not be true. I don't know how to make this API right without proper support from the host end, so I'd stick with the suggestion I gave in the last message.

Contributor: I don't see how this is broken.

If I were to write the code that you provided above, I would have checked whether or not the returned cursor is at the same position as it was before the clear operation.

When I see the word attempt, that is immediately what I think of, because it signals to me that the cursor may not advance after every operation. Regardless of whether or not we actually consume the cursor provided, checking that the cursor has in fact advanced is a sensible operation to perform after every clear call.
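The defensive pattern described in the last comment can be sketched with a self-contained toy. Here `broken_clear` and `drive_clear` are invented names: `broken_clear` models the problematic v2 host-function behaviour in which the cursor never advances within a block, and the loop guards against it rather than spinning forever.

```rust
// Sketch of the defensive loop suggested above: stop when the returned cursor
// fails to advance, so a `clear` backed by the old (v2) host function cannot
// spin forever within a single block. `broken_clear` is a toy stand-in that
// ignores the cursor it is given and always reports the same position.
fn broken_clear(_limit: u32, _maybe_cursor: Option<&[u8]>) -> Option<Vec<u8>> {
    Some(vec![0xde, 0xad]) // cursor never advances
}

fn drive_clear() -> u32 {
    let mut calls = 0u32;
    let mut prev: Option<Vec<u8>> = None;
    loop {
        calls += 1;
        match broken_clear(16, prev.as_deref()) {
            // `None` means nothing is left to delete.
            None => break,
            // Same cursor as last time: no progress was made, so give up
            // and retry in a later block instead of looping forever.
            Some(c) if prev.as_ref() == Some(&c) => break,
            Some(c) => prev = Some(c),
        }
    }
    calls
}
```

With the non-advancing stand-in, the guard trips on the second call; against a working implementation the same loop runs to completion.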

) -> ClearPrefixResult {
// TODO: Once the network has upgraded to include the new host functions, this code can be
// enabled.
// sp_io::default_child_storage::storage_kill(prefix, maybe_limit, maybe_cursor)
let r = match child_info.child_type() {
ChildType::ParentKeyId =>
-sp_io::default_child_storage::storage_kill(child_info.storage_key(), limit),
+sp_io::default_child_storage::storage_kill(child_info.storage_key(), maybe_limit),
};
-use sp_io::{ClearPrefixResult::*, KillStorageResult::*};
+use sp_io::KillStorageResult::*;
match r {
-AllRemoved(db) => NoneLeft { db, total: db },
-SomeRemaining(db) => SomeLeft { db, total: db },
+AllRemoved(db) => ClearPrefixResult { maybe_cursor: None, db, total: db, loops: db },
+SomeRemaining(db) => ClearPrefixResult { maybe_cursor: None, db, total: db, loops: db },
}
}

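The cursor protocol documented above can be illustrated with a self-contained toy model: a `BTreeMap` stands in for the backend, and the names `ClearResult` and `toy_clear_prefix` are invented for this sketch (they are not the Substrate API).

```rust
use std::collections::BTreeMap;

// Toy result type mirroring the shape of `ClearPrefixResult`.
struct ClearResult {
    maybe_cursor: Option<Vec<u8>>, // `Some` => call again, resuming here
    deleted: u32,
}

// Delete at most `limit` keys under `prefix`, resuming from `maybe_cursor`.
// One extra key is inspected past the limit so the caller learns whether
// anything remains (hence "one fewer than the maximum number of iterations").
fn toy_clear_prefix(
    backend: &mut BTreeMap<Vec<u8>, u64>,
    prefix: &[u8],
    limit: u32,
    maybe_cursor: Option<&[u8]>,
) -> ClearResult {
    let start = maybe_cursor.unwrap_or(prefix).to_vec();
    let keys: Vec<Vec<u8>> = backend
        .range(start..)
        .map(|(k, _)| k.clone())
        .filter(|k| k.starts_with(prefix))
        .take(limit as usize + 1)
        .collect();
    let mut deleted = 0;
    for k in keys.iter().take(limit as usize) {
        backend.remove(k);
        deleted += 1;
    }
    // The (limit + 1)-th matching key, if any, becomes the resume point.
    let maybe_cursor = keys.get(limit as usize).cloned();
    ClearResult { maybe_cursor, deleted }
}
```

Driving this to completion over ten keys with a limit of four takes three calls: the first call passes `None`, each later call passes the cursor returned by the previous one, and a returned `maybe_cursor` of `None` signals that nothing under the prefix remains.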
16 changes: 12 additions & 4 deletions frame/support/src/storage/generator/double_map.rs
@@ -202,18 +202,26 @@ where
unhashed::kill(&Self::storage_double_map_final_key(k1, k2))
}

-fn remove_prefix<KArg1>(k1: KArg1, limit: Option<u32>) -> sp_io::KillStorageResult
+fn remove_prefix<KArg1>(k1: KArg1, maybe_limit: Option<u32>) -> sp_io::KillStorageResult
where
KArg1: EncodeLike<K1>,
{
-Self::clear_prefix(k1, limit).into()
+unhashed::clear_prefix(
+	Self::storage_double_map_final_key1(k1).as_ref(),
+	maybe_limit,
+	None,
+).into()
}

-fn clear_prefix<KArg1>(k1: KArg1, limit: Option<u32>) -> sp_io::ClearPrefixResult
+fn clear_prefix<KArg1>(k1: KArg1, limit: u32, maybe_cursor: Option<&[u8]>) -> sp_io::ClearPrefixResult
where
KArg1: EncodeLike<K1>,
{
-unhashed::clear_prefix(Self::storage_double_map_final_key1(k1).as_ref(), limit)
+unhashed::clear_prefix(
+	Self::storage_double_map_final_key1(k1).as_ref(),
+	Some(limit),
+	maybe_cursor,
+).into()
}

fn iter_prefix_values<KArg1>(k1: KArg1) -> storage::PrefixIterator<V>
6 changes: 3 additions & 3 deletions frame/support/src/storage/generator/nmap.rs
@@ -183,14 +183,14 @@ where
where
K: HasKeyPrefix<KP>,
{
-Self::clear_prefix(partial_key, limit).into()
+unhashed::clear_prefix(&Self::storage_n_map_partial_key(partial_key), limit, None).into()
}

-fn clear_prefix<KP>(partial_key: KP, limit: Option<u32>) -> sp_io::ClearPrefixResult
+fn clear_prefix<KP>(partial_key: KP, limit: u32, maybe_cursor: Option<&[u8]>) -> sp_io::ClearPrefixResult
where
K: HasKeyPrefix<KP>,
{
-unhashed::clear_prefix(&Self::storage_n_map_partial_key(partial_key), limit)
+unhashed::clear_prefix(&Self::storage_n_map_partial_key(partial_key), Some(limit), maybe_cursor)
}

fn iter_prefix_values<KP>(partial_key: KP) -> PrefixIterator<V>
32 changes: 31 additions & 1 deletion frame/support/src/storage/migration.rs
@@ -256,12 +256,42 @@ pub fn put_storage_value<T: Encode>(module: &[u8], item: &[u8], hash: &[u8], val

/// Remove all items under a storage prefix by the `module`, the map's `item` name and the key
/// `hash`.
#[deprecated = "Use `clear_storage_prefix` instead"]
pub fn remove_storage_prefix(module: &[u8], item: &[u8], hash: &[u8]) {
let mut key = vec![0u8; 32 + hash.len()];
let storage_prefix = storage_prefix(module, item);
key[0..32].copy_from_slice(&storage_prefix);
key[32..].copy_from_slice(hash);
-frame_support::storage::unhashed::clear_prefix(&key, None);
+let _ = frame_support::storage::unhashed::clear_prefix(&key, None, None);
}

/// Attempt to remove all values under a storage prefix by the `module`, the map's `item` name and
/// the key `hash`.
///
/// All values in the client overlay will be deleted, if `maybe_limit` is `Some` then up to
/// that number of values are deleted from the client backend, otherwise all values in the
/// client backend are deleted.
///
/// ## Cursors
///
/// The `maybe_cursor` parameter should be `None` for the first call to initiate removal.
/// If the resultant `maybe_cursor` is `Some`, then another call is required to complete the
/// removal operation. This value must be passed in as the subsequent call's `maybe_cursor`
/// parameter. If the resultant `maybe_cursor` is `None`, then the operation is complete and no
/// items remain in storage provided that no items were added between the first call and the
/// final call.
pub fn clear_storage_prefix(
module: &[u8],
item: &[u8],
hash: &[u8],
maybe_limit: Option<u32>,
maybe_cursor: Option<&[u8]>
) -> sp_io::ClearPrefixResult {
let mut key = vec![0u8; 32 + hash.len()];
let storage_prefix = storage_prefix(module, item);
key[0..32].copy_from_slice(&storage_prefix);
key[32..].copy_from_slice(hash);
frame_support::storage::unhashed::clear_prefix(&key, maybe_limit, maybe_cursor)
}
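For orientation, the key assembled here is a 32-byte pallet/storage prefix followed by the raw `hash` suffix. The following sketch mirrors that layout with invented names (`mock_hash16`, `final_key`); FRAME's real `storage_prefix` uses `twox128` for the two 16-byte halves, so the exact bytes below are illustrative only.

```rust
// Stand-in for a 16-byte hash (FRAME uses twox128 here); only the output
// length matters for illustrating the key layout.
fn mock_hash16(data: &[u8]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for (i, b) in data.iter().copied().enumerate() {
        out[i % 16] ^= b;
    }
    out
}

// Mirrors the key construction in `clear_storage_prefix`:
// [ hash(module) | hash(item) | raw `hash` suffix ].
fn final_key(module: &[u8], item: &[u8], hash: &[u8]) -> Vec<u8> {
    let mut key = vec![0u8; 32 + hash.len()];
    key[0..16].copy_from_slice(&mock_hash16(module));
    key[16..32].copy_from_slice(&mock_hash16(item));
    key[32..].copy_from_slice(hash);
    key
}
```

Everything stored under a map thus shares the first 32 bytes, which is why clearing by this prefix removes the whole map (or, with a longer `hash` suffix, a sub-range of it).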

/// Take a particular item in storage by the `module`, the map's `item` name and the key `hash`.
104 changes: 69 additions & 35 deletions frame/support/src/storage/mod.rs
@@ -519,19 +519,26 @@ pub trait StorageDoubleMap<K1: FullEncode, K2: FullEncode, V: FullCodec> {
where
KArg1: ?Sized + EncodeLike<K1>;

/// Remove all values under the first key `k1` in the overlay and up to `limit` in the
/// Remove all values under the first key `k1` in the overlay and up to `maybe_limit` in the
/// backend.
///
/// All values in the client overlay will be deleted, if there is some `limit` then up to
/// `limit` values are deleted from the client backend, if `limit` is none then all values in
/// the client backend are deleted.
///
/// # Note
///
/// Calling this multiple times per block with a `limit` set leads always to the same keys being
/// removed and the same result being returned. This happens because the keys to delete in the
/// overlay are not taken into account when deleting keys in the backend.
fn clear_prefix<KArg1>(k1: KArg1, limit: Option<u32>) -> sp_io::ClearPrefixResult
/// All values in the client overlay will be deleted, if `maybe_limit` is `Some` then up to
/// that number of values are deleted from the client backend, otherwise all values in the
/// client backend are deleted.
///
/// ## Cursors
///
/// The `maybe_cursor` parameter should be `None` for the first call to initiate removal.
/// If the resultant `maybe_cursor` is `Some`, then another call is required to complete the
/// removal operation. This value must be passed in as the subsequent call's `maybe_cursor`
/// parameter. If the resultant `maybe_cursor` is `None`, then the operation is complete and no
/// items remain in storage provided that no items were added between the first call and the
/// final call.
fn clear_prefix<KArg1>(
k1: KArg1,
limit: u32,
maybe_cursor: Option<&[u8]>,
) -> sp_io::ClearPrefixResult
where
KArg1: ?Sized + EncodeLike<K1>;

@@ -677,19 +684,34 @@ pub trait StorageNMap<K: KeyGenerator, V: FullCodec> {
where
K: HasKeyPrefix<KP>;

/// Remove all values starting with `partial_key` in the overlay and up to `limit` in the
/// backend.
/// Attempt to remove items from the map matching a `partial_key` prefix.
///
/// All values in the client overlay will be deleted, if there is some `limit` then up to
/// `limit` values are deleted from the client backend, if `limit` is none then all values in
/// the client backend are deleted.
/// Returns [`ClearPrefixResult`] to inform about the result. Once the resultant `maybe_cursor`
/// field is `None`, then no further items remain to be deleted.
///
/// # Note
/// NOTE: After the initial call for any given map, it is important that no further items
/// are inserted into the map which match the `partial_key`. If so, then the map may not be
/// empty when the resultant `maybe_cursor` is `None`.
///
/// Calling this multiple times per block with a `limit` set leads always to the same keys being
/// removed and the same result being returned. This happens because the keys to delete in the
/// overlay are not taken into account when deleting keys in the backend.
fn clear_prefix<KP>(partial_key: KP, limit: Option<u32>) -> sp_io::ClearPrefixResult
/// # Limit
///
/// A `limit` must be provided in order to cap the maximum
/// number of deletions done in a single call. This is one fewer than the
/// maximum number of backend iterations which may be done by this operation and as such
/// represents the maximum number of backend deletions which may happen. A *limit* of zero
/// implies that no keys will be deleted, though there may be a single iteration done.
///
/// # Cursor
///
/// A *cursor* may be passed in to this operation with `maybe_cursor`. `None` should only be
/// passed once (in the initial call) for any given storage map and `partial_key`. Subsequent
/// calls operating on the same map/`partial_key` should always pass `Some`, and this should be
/// equal to the previous call result's `maybe_cursor` field.
fn clear_prefix<KP>(
partial_key: KP,
limit: u32,
maybe_cursor: Option<&[u8]>,
) -> sp_io::ClearPrefixResult
where
K: HasKeyPrefix<KP>;

@@ -1145,22 +1167,34 @@ pub trait StoragePrefixedMap<Value: FullCodec> {
/// overlay are not taken into account when deleting keys in the backend.
#[deprecated = "Use `clear` instead"]
fn remove_all(limit: Option<u32>) -> sp_io::KillStorageResult {
-Self::clear(limit).into()
+unhashed::clear_prefix(&Self::final_prefix(), limit, None).into()
}

/// Remove all values in the overlay and up to `limit` in the backend.
/// Attempt to remove all items from the map.
///
/// All values in the client overlay will be deleted, if there is some `limit` then up to
/// `limit` values are deleted from the client backend, if `limit` is none then all values in
/// the client backend are deleted.
/// Returns [`ClearPrefixResult`] to inform about the result. Once the resultant `maybe_cursor`
/// field is `None`, then no further items remain to be deleted.
///
/// # Note
/// NOTE: After the initial call for any given map, it is important that no further items
/// are inserted into the map. If so, then the map may not be empty when the resultant
/// `maybe_cursor` is `None`.
///
/// Calling this multiple times per block with a `limit` set leads always to the same keys being
/// removed and the same result being returned. This happens because the keys to delete in the
/// overlay are not taken into account when deleting keys in the backend.
-fn clear(limit: Option<u32>) -> sp_io::ClearPrefixResult {
-	unhashed::clear_prefix(&Self::final_prefix(), limit)
/// # Limit
///
/// A `limit` must always be provided in order to cap the maximum
/// number of deletions done in a single call. This is one fewer than the
/// maximum number of backend iterations which may be done by this operation and as such
/// represents the maximum number of backend deletions which may happen. A *limit* of zero
/// implies that no keys will be deleted, though there may be a single iteration done.
///
/// # Cursor
///
/// A *cursor* may be passed in to this operation with `maybe_cursor`. `None` should only be
/// passed once (in the initial call) for any given storage map. Subsequent calls
/// operating on the same map should always pass `Some`, and this should be equal to the
/// previous call result's `maybe_cursor` field.
+fn clear(limit: u32, maybe_cursor: Option<&[u8]>) -> sp_io::ClearPrefixResult {
+	unhashed::clear_prefix(&Self::final_prefix(), Some(limit), maybe_cursor)
}

/// Iter over all value of the storage.
@@ -1475,7 +1509,7 @@ mod test {
assert_eq!(MyStorage::iter_values().collect::<Vec<_>>(), vec![1, 2, 3, 4]);

// test removal
-MyStorage::clear(None);
+let _ = MyStorage::clear(u32::max_value(), None);
assert!(MyStorage::iter_values().collect::<Vec<_>>().is_empty());

// test migration
@@ -1485,7 +1519,7 @@
assert!(MyStorage::iter_values().collect::<Vec<_>>().is_empty());
MyStorage::translate_values(|v: u32| Some(v as u64));
assert_eq!(MyStorage::iter_values().collect::<Vec<_>>(), vec![1, 2]);
-MyStorage::clear(None);
+let _ = MyStorage::clear(u32::max_value(), None);

// test migration 2
unhashed::put(&[&k[..], &vec![1][..]].concat(), &1u128);
Expand All @@ -1497,7 +1531,7 @@ mod test {
assert_eq!(MyStorage::iter_values().collect::<Vec<_>>(), vec![1, 2, 3]);
MyStorage::translate_values(|v: u128| Some(v as u64));
assert_eq!(MyStorage::iter_values().collect::<Vec<_>>(), vec![1, 2, 3]);
-MyStorage::clear(None);
+let _ = MyStorage::clear(u32::max_value(), None);

// test that other values are not modified.
assert_eq!(unhashed::get(&key_before[..]), Some(32u64));