User defined datatypes #13
SparkSql crashes on selecting tables that use a custom serde. Example (the original report's quoting was malformed; standard Hive DDL syntax restored):

    CREATE EXTERNAL TABLE table_name
    PARTITIONED BY (a INT)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.thrift.ThriftDeserializer'
    WITH SERDEPROPERTIES (
      'serialization.format' = 'org.apache.thrift.protocol.TBinaryProtocol',
      'serialization.class' = 'ser_class'
    )
    STORED AS SEQUENCEFILE;

Running a query such as 'select * from table_name limit 1' then fails with the following exception:

    ERROR CliDriver: org.apache.hadoop.hive.serde2.SerDeException: java.lang.NullPointerException
        at org.apache.hadoop.hive.serde2.thrift.ThriftDeserializer.initialize(ThriftDeserializer.java:68)
        at org.apache.hadoop.hive.ql.plan.TableDesc.getDeserializer(TableDesc.java:80)
        at org.apache.spark.sql.hive.execution.HiveTableScan.addColumnMetadataToConf(HiveTableScan.scala:86)
        at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:100)
        at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
        at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
        at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:364)
        at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:184)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
        at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:280)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:402)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:400)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:406)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:406)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:406)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.NullPointerException

Author: chirag <[email protected]>

Closes apache#2674 from chiragaggarwal/branch-1.1 and squashes the following commits:

370c31b [chirag] SPARK-3807: Add a test case to validate the fix.
1f26805 [chirag] SPARK-3807: SparkSql does not work for tables created using custom serde (Incorporated Review Comments)
ba4bc0c [chirag] SPARK-3807: SparkSql does not work for tables created using custom serde
5c73b72 [chirag] SPARK-3807: SparkSql does not work for tables created using custom serde

(cherry picked from commit 925e22d)
Signed-off-by: Michael Armbrust <[email protected]>
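Per the stack trace, the NPE fires inside ThriftDeserializer.initialize when the table scan obtains a deserializer without the table's serde properties having been propagated. A minimal self-contained sketch of that failure pattern (SimpleDeserializer and its method names are hypothetical stand-ins for illustration, not Hive's actual API):

```java
import java.util.Properties;

// Hypothetical stand-in for a Hive SerDe: initialize() expects the table's
// serde properties. Passing null reproduces the kind of NullPointerException
// seen at ThriftDeserializer.initialize in the stack trace above.
class SimpleDeserializer {
    private String serializationClass;

    void initialize(Properties tableProperties) {
        // NPE here when tableProperties is null, i.e. when the planner
        // builds the deserializer without passing table metadata through.
        serializationClass = tableProperties.getProperty("serialization.class");
        if (serializationClass == null) {
            throw new IllegalStateException("serialization.class not set");
        }
    }

    String serializationClass() { return serializationClass; }
}

public class SerdeInitDemo {
    public static void main(String[] args) {
        // Buggy path: properties never propagated, initialize(null) -> NPE.
        SimpleDeserializer broken = new SimpleDeserializer();
        boolean npe = false;
        try {
            broken.initialize(null);
        } catch (NullPointerException e) {
            npe = true;
        }
        System.out.println("NPE on missing properties: " + npe);

        // Fixed path: the scan passes the table's serde properties through,
        // so initialization succeeds.
        Properties props = new Properties();
        props.setProperty("serialization.class", "ser_class");
        SimpleDeserializer fixed = new SimpleDeserializer();
        fixed.initialize(props);
        System.out.println("initialized with: " + fixed.serializationClass());
    }
}
```

The sketch only illustrates why the deserializer must be initialized with the table's properties; the actual patch wires the real table metadata through Spark's Hive table scan before the deserializer is used.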