Merged
Fix.
tomusdrw committed Oct 21, 2019
commit 32b559ee8cfb49dcf8468a8ba2301c90b2597ee5
17 changes: 16 additions & 1 deletion primitive-types/impls/serde/src/serialize.rs
@@ -105,7 +105,7 @@ pub fn deserialize<'de, D>(deserializer: D) -> Result<Vec<u8>, D::Error> where

    let bytes_len = v.len() - 2;
    let mut modulus = bytes_len % 2;
-   let mut bytes = vec![0u8; bytes_len / 2 + 1];
+   let mut bytes = vec![0u8; (bytes_len + 1) / 2];
niklasad1 (Contributor) commented on Oct 21, 2019:
Why bother with this complex logic?
Isn't it better to use Vec::new() / Vec::with_capacity() and Vec::push?

tomusdrw (Contributor, Author) replied:
Well, it's a performance optimization (hopefully measurable), which is justified since this code might be on a hot path.
Also, this logic is not that complex IMHO; it seems it just wasn't tested properly.

A contributor replied:

I for one kind of agree with Niklas here: it's a bit odd and I'm surprised to learn it's faster to init a vec with vec![0u8; 123] than Vec::with_capacity(123).

(No action needed here, I'm just curious)

A contributor replied:

I'm about to add benches for Bytes; to be continued :P

tomusdrw (Contributor, Author) replied:

Wow, thanks for making me realize this is actually a vec. I was pretty sure that we just allocate an array here (but obviously we don't, because it's not a constant size).
I'm really curious about the performance indeed, as just using push instead of direct access seems way less error-prone, and with_capacity should ensure that there are no re-allocations.

A contributor replied:
I really hope there's no difference in performance! :)
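The fix in this commit turns the buffer-size formula into ceiling division. As a minimal sketch (the `decoded_len` helper below is hypothetical, not a function in this crate), the new `(n + 1) / 2` gives the exact byte count for both even- and odd-length hex strings, while the old `n / 2 + 1` over-allocated by one byte whenever the digit count was even:

```rust
// Hypothetical helper (not in the crate): number of bytes produced by
// `n` hex digits. `(n + 1) / 2` is ceiling division in integer arithmetic.
fn decoded_len(hex_digits: usize) -> usize {
    (hex_digits + 1) / 2
}

fn main() {
    // "0x" -> 0 digits -> 0 bytes; "0x1" -> 1 digit -> 1 byte;
    // "0x1234" -> 4 digits -> 2 bytes; "0x12345" -> 5 digits -> 3 bytes.
    assert_eq!(decoded_len(0), 0);
    assert_eq!(decoded_len(1), 1);
    assert_eq!(decoded_len(4), 2);
    assert_eq!(decoded_len(5), 3);
    // The old formula `n / 2 + 1` yields 3 for 4 digits: one byte too many.
    assert_eq!(4 / 2 + 1, 3);
}
```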

    let mut buf = 0;
    let mut pos = 0;
    for (idx, byte) in v.bytes().enumerate().skip(2) {
@@ -229,15 +229,30 @@ mod tests {

    #[test]
    fn should_not_fail_on_short_string() {
        let a: Bytes = serde_json::from_str("\"0x\"").unwrap();
        let b: Bytes = serde_json::from_str("\"0x1\"").unwrap();
        let c: Bytes = serde_json::from_str("\"0x12\"").unwrap();
        let d: Bytes = serde_json::from_str("\"0x123\"").unwrap();
        let e: Bytes = serde_json::from_str("\"0x1234\"").unwrap();
        let f: Bytes = serde_json::from_str("\"0x12345\"").unwrap();

        assert!(a.0.is_empty());
        assert_eq!(b.0, vec![1]);
        assert_eq!(c.0, vec![0x12]);
        assert_eq!(d.0, vec![0x1, 0x23]);
        assert_eq!(e.0, vec![0x12, 0x34]);
        assert_eq!(f.0, vec![0x1, 0x23, 0x45]);
    }


    #[test]
    fn should_not_fail_on_other_strings() {
        let a: Bytes = serde_json::from_str("\"0x7f864e18e3dd8b58386310d2fe0919eef27c6e558564b7f67f22d99d20f587\"").unwrap();
        let b: Bytes = serde_json::from_str("\"0x7f864e18e3dd8b58386310d2fe0919eef27c6e558564b7f67f22d99d20f587b\"").unwrap();
        let c: Bytes = serde_json::from_str("\"0x7f864e18e3dd8b58386310d2fe0919eef27c6e558564b7f67f22d99d20f587b4\"").unwrap();

        assert_eq!(a.0.len(), 31);
        assert_eq!(b.0.len(), 32);
        assert_eq!(c.0.len(), 32);
    }
}
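For comparison, the push-based variant niklasad1 suggested could look roughly like the sketch below. The `decode_hex` helper is hypothetical, not the crate's actual deserializer (the real one also checks the "0x" prefix and reports serde errors); `Vec::with_capacity` reserves the exact ceiling-division length up front, so the pushes never reallocate:

```rust
// Hypothetical sketch of the push-based alternative from the review.
// Takes the hex digits with the "0x" prefix already stripped.
fn decode_hex(digits: &str) -> Option<Vec<u8>> {
    let mut out = Vec::with_capacity((digits.len() + 1) / 2);
    let mut iter = digits.chars();
    if digits.len() % 2 == 1 {
        // Odd length: the leading nibble becomes a byte on its own,
        // matching `"0x123"` decoding to `[0x1, 0x23]` in the tests above.
        out.push(iter.next()?.to_digit(16)? as u8);
    }
    while let Some(hi) = iter.next() {
        let lo = iter.next()?;
        out.push(((hi.to_digit(16)? << 4) | lo.to_digit(16)?) as u8);
    }
    Some(out)
}

fn main() {
    assert_eq!(decode_hex("").unwrap(), Vec::<u8>::new());
    assert_eq!(decode_hex("123").unwrap(), vec![0x01, 0x23]);
    assert_eq!(decode_hex("12345").unwrap(), vec![0x01, 0x23, 0x45]);
}
```

Whether this is actually slower than filling a zeroed `vec!` and writing by index is exactly what the promised benchmarks would settle.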