[SPARK-25440][SQL] Dumping query execution info to a file #22429
```diff
@@ -23,6 +23,7 @@ import java.util.concurrent.atomic.AtomicBoolean

 import org.apache.spark.SparkEnv
 import org.apache.spark.internal.Logging
+import org.apache.spark.internal.config
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.internal.SQLConf
 import org.apache.spark.sql.types.{NumericType, StringType}
@@ -174,15 +175,14 @@ package object util extends Logging {
   /**
    * The performance overhead of creating and logging strings for wide schemas can be large. To
    * limit the impact, we bound the number of fields to include by default. This can be overridden
-   * by setting the 'spark.debug.maxToStringFields' conf in SparkEnv.
+   * by setting the 'spark.debug.maxToStringFields' conf in SparkEnv or by settings the SQL config
+   * `spark.sql.debug.maxToStringFields`.
    */
   val DEFAULT_MAX_TO_STRING_FIELDS = 25

-  private[spark] def maxNumToStringFields = {
+  private[spark] def maxNumToStringFields: Int = {
     val legacyLimit = if (SparkEnv.get != null) {
-      SparkEnv.get.conf.getInt("spark.debug.maxToStringFields", DEFAULT_MAX_TO_STRING_FIELDS)
+      SparkEnv.get.conf.get(config.MAX_TO_STRING_FIELDS)
     } else {
-      DEFAULT_MAX_TO_STRING_FIELDS
+      config.MAX_TO_STRING_FIELDS.defaultValue.get
     }
     val sqlConfLimit = SQLConf.get.maxToStringFields
```
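The change above reads the legacy limit from the core config entry `config.MAX_TO_STRING_FIELDS` (falling back to its default when no `SparkEnv` exists) and additionally consults the new SQL config via `SQLConf.get.maxToStringFields`. The diff is truncated before the two limits are combined, so the sketch below is a hypothetical, self-contained model, not the PR's actual code: the confs are modeled as `Option[Int]`, and the assumption that the larger of the two bounds wins is mine.

```scala
// Minimal sketch of the resolution logic, with both config sources
// modeled as Options instead of SparkConf / SQLConf lookups.
object MaxFieldsSketch {
  // Mirrors DEFAULT_MAX_TO_STRING_FIELDS from the diff.
  val DefaultMaxToStringFields = 25

  // legacyConf: spark.debug.maxToStringFields, if a SparkEnv is available.
  // sqlConf: spark.sql.debug.maxToStringFields, if explicitly set.
  def maxNumToStringFields(legacyConf: Option[Int], sqlConf: Option[Int]): Int = {
    val legacyLimit = legacyConf.getOrElse(DefaultMaxToStringFields)
    // Assumption: when the SQL config is set, the larger bound wins;
    // the actual combination rule is cut off in the diff above.
    sqlConf.fold(legacyLimit)(sqlLimit => math.max(legacyLimit, sqlLimit))
  }

  def main(args: Array[String]): Unit = {
    println(maxNumToStringFields(None, None))          // falls back to the default
    println(maxNumToStringFields(Some(100), Some(40))) // legacy limit dominates here
  }
}
```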
Review comments:

> What is a sequence-like entry?

> I don't know how else I can describe all the kinds of classes where the parameter is applicable. If you have better words, you are welcome.

> I am going to change it to "Maximum number of fields of a tree node that can be ..." for the SQL config.