# pull latest apache spark #10
## What changes were proposed in this pull request?

`StructType.fromInternal` calls `f.fromInternal(v)` for every field. We can use precalculated information about the type to limit the number of function calls (it is computed once per `StructType` and reused for every record's conversion).

Benchmarks (Python profiler):

```
df = spark.range(10000000).selectExpr("id as id0", "id as id1", "id as id2", "id as id3", "id as id4", "id as id5", "id as id6", "id as id7", "id as id8", "id as id9", "struct(id) as s").cache()
df.count()
df.rdd.map(lambda x: x).count()
```

Before

```
310274584 function calls (300272456 primitive calls) in 1320.684 seconds

Ordered by: internal time, cumulative time

   ncalls    tottime  percall   cumtime  percall  filename:lineno(function)
 10000000    253.417    0.000   486.991    0.000  types.py:619(<listcomp>)
 30000000    192.272    0.000  1009.986    0.000  types.py:612(fromInternal)
100000000    176.140    0.000   176.140    0.000  types.py:88(fromInternal)
 20000000    156.832    0.000   328.093    0.000  types.py:1471(_create_row)
    14000    107.206    0.008  1237.917    0.088  {built-in method loads}
 20000000     80.176    0.000  1090.162    0.000  types.py:1468(<lambda>)
```

After

```
210274584 function calls (200272456 primitive calls) in 1035.974 seconds

Ordered by: internal time, cumulative time

   ncalls    tottime  percall   cumtime  percall  filename:lineno(function)
 30000000    215.845    0.000   698.748    0.000  types.py:612(fromInternal)
 20000000    165.042    0.000   351.572    0.000  types.py:1471(_create_row)
    14000    116.834    0.008   946.791    0.068  {built-in method loads}
 20000000     87.326    0.000   786.073    0.000  types.py:1468(<lambda>)
 20000000     85.477    0.000   134.607    0.000  types.py:1519(__new__)
 10000000     65.777    0.000   126.712    0.000  types.py:619(<listcomp>)
```

The main difference is in `types.py:619(<listcomp>)` and `types.py:88(fromInternal)` (the latter disappears entirely in the "After" profile). There are about 100 million fewer function calls, and performance is roughly 20% better.

Benchmark (worst-case scenario):

```
df = spark.range(1000000).selectExpr("current_timestamp as id0", "current_timestamp as id1", "current_timestamp as id2", "current_timestamp as id3", "current_timestamp as id4", "current_timestamp as id5", "current_timestamp as id6", "current_timestamp as id7", "current_timestamp as id8", "current_timestamp as id9").cache()
df.count()
df.rdd.map(lambda x: x).count()
```

Before

```
31166064 function calls (31163984 primitive calls) in 150.882 seconds
```

After

```
31166064 function calls (31163984 primitive calls) in 153.220 seconds
```

IMPORTANT: The benchmark was done on top of apache#19246. Without apache#19246 the performance improvement would be even greater.

## How was this patch tested?

Existing tests. Performance benchmark.

Author: Maciej Bryński <[email protected]>

Closes apache#19249 from maver1ck/spark_22032.
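To make the optimization described above concrete, here is a minimal, self-contained sketch of the precomputed-converter idea. It is an illustration only, not the actual patch: the class and method names (`SimpleType`, `TimestampLikeType`, `SimpleStructType`) are invented for the example and do not belong to `pyspark.sql.types`.

```python
# Sketch: cache, once per struct type, which fields actually need conversion,
# and skip the per-field fromInternal() call for fields that do not.
import datetime


class SimpleType:
    """Field type whose internal and external representations are identical."""
    def needConversion(self):
        return False

    def fromInternal(self, obj):
        return obj


class TimestampLikeType:
    """Field type stored internally as microseconds; must be converted per record."""
    def needConversion(self):
        return True

    def fromInternal(self, obj):
        if obj is None:
            return None
        # Simplified conversion: microseconds since the epoch -> datetime (second precision).
        return datetime.datetime.fromtimestamp(obj // 1000000)


class SimpleStructType:
    def __init__(self, field_types):
        self.fields = field_types
        # Precalculated once per struct type, reused for every record.
        self._need_conversion = [f.needConversion() for f in field_types]
        self._need_any = any(self._need_conversion)

    def fromInternal(self, values):
        if not self._need_any:
            # Fast path: no field needs conversion, so no per-field calls at all.
            return tuple(values)
        # Mixed case: only call fromInternal() on the fields that need it.
        return tuple(
            f.fromInternal(v) if need else v
            for f, v, need in zip(self.fields, values, self._need_conversion)
        )


# Ten plain fields -> zero per-field fromInternal() calls per record.
schema = SimpleStructType([SimpleType() for _ in range(10)])
print(schema.fromInternal(list(range(10))))
```

The real change presumably attaches a similar per-field `needConversion()` list to `StructType` itself, which is why the first benchmark (plain `id` columns) improves sharply while the worst case (all `current_timestamp` columns, where every field still needs conversion) stays essentially flat.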
There was an error while loading. Please reload this page.
There are no files selected for viewing