
Commit 1426a08

liancheng authored and yhuai committed
[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update
## What changes were proposed in this pull request?

This PR moves the last remaining hard-coded Scala example snippet from the SQL programming guide into `SparkSqlExample.scala`. It also renames all Scala/Java example files so that every "Sql" in the file names is updated to "SQL".

## How was this patch tested?

Manually verified the generated HTML page.

Author: Cheng Lian <lian@databricks.com>

Closes apache#14245 from liancheng/minor-scala-example-update.
1 parent e5fbb18 commit 1426a08

File tree

5 files changed: +35 −36 lines


docs/sql-programming-guide.md

Lines changed: 28 additions & 29 deletions
@@ -65,14 +65,14 @@ Throughout this document, we will often refer to Scala/Java Datasets of `Row`s a
 
 The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% include_example init_session scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example init_session scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
 
 The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -105,7 +105,7 @@ from a Hive table, or from [Spark data sources](#data-sources).
 
 As an example, the following creates a DataFrame based on the content of a JSON file:
 
-{% include_example create_df scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example create_df scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
@@ -114,7 +114,7 @@ from a Hive table, or from [Spark data sources](#data-sources).
 
 As an example, the following creates a DataFrame based on the content of a JSON file:
 
-{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -155,7 +155,7 @@ Here we include some basic examples of structured data processing using Datasets
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
-{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/scala/index.html#org.apache.spark.sql.Dataset).
 
@@ -164,7 +164,7 @@ In addition to simple column references and expressions, Datasets also have a ri
 
 <div data-lang="java" markdown="1">
 
-{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/java/org/apache/spark/sql/Dataset.html).
 
@@ -249,13 +249,13 @@ In addition to simple column references and expressions, DataFrames also have a
 <div data-lang="scala" markdown="1">
 The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `DataFrame`.
 
-{% include_example run_sql scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example run_sql scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
 The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `Dataset<Row>`.
 
-{% include_example run_sql java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example run_sql java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -287,11 +287,11 @@ the bytes back into an object.
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
-{% include_example create_ds scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example create_ds scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
-{% include_example create_ds java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example create_ds java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 </div>
 </div>
 
@@ -318,7 +318,7 @@ reflection and become the names of the columns. Case classes can also be nested
 types such as `Seq`s or `Array`s. This RDD can be implicitly converted to a DataFrame and then be
 registered as a table. Tables can be used in subsequent SQL statements.
 
-{% include_example schema_inferring scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example schema_inferring scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
@@ -330,7 +330,7 @@ does not support JavaBeans that contain `Map` field(s). Nested JavaBeans and `Li
 fields are supported though. You can create a JavaBean by creating a class that implements
 Serializable and has getters and setters for all of its fields.
 
-{% include_example schema_inferring java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example schema_inferring java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -385,7 +385,7 @@ by `SparkSession`.
 
 For example:
 
-{% include_example programmatic_schema scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example programmatic_schema scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
@@ -403,7 +403,7 @@ by `SparkSession`.
 
 For example:
 
-{% include_example programmatic_schema java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example programmatic_schema java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -472,11 +472,11 @@ In the simplest form, the default data source (`parquet` unless otherwise config
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
-{% include_example generic_load_save_functions scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example generic_load_save_functions scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
-{% include_example generic_load_save_functions java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example generic_load_save_functions java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -507,11 +507,11 @@ using this syntax.
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
-{% include_example manual_load_options scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example manual_load_options scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
-{% include_example manual_load_options java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example manual_load_options java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -538,11 +538,11 @@ file directly with SQL.
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
-{% include_example direct_sql scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example direct_sql scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
-{% include_example direct_sql java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example direct_sql java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -633,11 +633,11 @@ Using the data from the above example:
 <div class="codetabs">
 
 <div data-lang="scala" markdown="1">
-{% include_example basic_parquet_example scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example basic_parquet_example scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
-{% include_example basic_parquet_example java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example basic_parquet_example java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -766,11 +766,11 @@ turned it off by default starting from 1.5.0. You may enable it by
 <div class="codetabs">
 
 <div data-lang="scala" markdown="1">
-{% include_example schema_merging scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example schema_merging scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
-{% include_example schema_merging java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example schema_merging java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -973,7 +973,7 @@ Note that the file that is offered as _a json file_ is not a typical JSON file.
 line must contain a separate, self-contained valid JSON object. As a consequence,
 a regular multi-line JSON file will most often fail.
 
-{% include_example json_dataset scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example json_dataset scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
@@ -985,7 +985,7 @@ Note that the file that is offered as _a json file_ is not a typical JSON file.
 line must contain a separate, self-contained valid JSON object. As a consequence,
 a regular multi-line JSON file will most often fail.
 
-{% include_example json_dataset java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example json_dataset java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
 </div>
 
 <div data-lang="python" markdown="1">
@@ -1879,9 +1879,8 @@ Spark SQL and DataFrames support the following data types:
 
 All data types of Spark SQL are located in the package `org.apache.spark.sql.types`.
 You can access them by doing
-{% highlight scala %}
-import org.apache.spark.sql.types._
-{% endhighlight %}
+
+{% include_example data_types scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 <table class="table">
 <tr>
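For context, the `{% include_example %}` Liquid tags in the hunks above pull their code out of tagged regions in the example sources. A minimal sketch of the new `data_types` region in `SparkSQLExample.scala`, reconstructed from this commit's hunks (surrounding code elided):

```scala
// Docs-build region markers: only the code between the on/off tags for a
// given label is rendered into the guide where `include_example` names it.
// $example on:data_types$
import org.apache.spark.sql.types._
// $example off:data_types$
```

The guide side then references the label and the (renamed) file path, e.g. `{% include_example data_types scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}`.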

examples/src/main/java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java renamed to examples/src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@
 // $example off:basic_parquet_example$
 import org.apache.spark.sql.SparkSession;
 
-public class JavaSqlDataSourceExample {
+public class JavaSQLDataSourceExample {
 
   // $example on:schema_merging$
   public static class Square implements Serializable {

examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSqlExample.java renamed to examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQLExample.java

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@
 import static org.apache.spark.sql.functions.col;
 // $example off:untyped_ops$
 
-public class JavaSparkSqlExample {
+public class JavaSparkSQLExample {
   // $example on:create_ds$
   public static class Person implements Serializable {
     private String name;

examples/src/main/scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala renamed to examples/src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ package org.apache.spark.examples.sql
 
 import org.apache.spark.sql.SparkSession
 
-object SqlDataSourceExample {
+object SQLDataSourceExample {
 
   case class Person(name: String, age: Long)
 
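The renamed `SQLDataSourceExample` keeps its `Person(name, age)` case class. As a hedged sketch of how such a case class backs a typed Dataset (the session setup and sample data here are illustrative, not part of this commit):

```scala
import org.apache.spark.sql.SparkSession

object PersonDatasetSketch {
  // Case class fields become Dataset columns via the implicit encoder.
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    // appName/master values are illustrative assumptions.
    val spark = SparkSession.builder()
      .appName("sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val ds = spark.createDataset(Seq(Person("Andy", 32)))
    ds.show()
    spark.stop()
  }
}
```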
examples/src/main/scala/org/apache/spark/examples/sql/SparkSqlExample.scala renamed to examples/src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala

Lines changed: 4 additions & 4 deletions
@@ -25,12 +25,12 @@ import org.apache.spark.sql.Row
 import org.apache.spark.sql.SparkSession
 // $example off:init_session$
 // $example on:programmatic_schema$
-import org.apache.spark.sql.types.StringType
-import org.apache.spark.sql.types.StructField
-import org.apache.spark.sql.types.StructType
+// $example on:data_types$
+import org.apache.spark.sql.types._
+// $example off:data_types$
 // $example off:programmatic_schema$
 
-object SparkSqlExample {
+object SparkSQLExample {
 
   // $example on:create_ds$
   // Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
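The hunk above swaps three single-type imports for the wildcard `import org.apache.spark.sql.types._`, which covers everything the `programmatic_schema` snippet needs. A hedged sketch of building a schema programmatically with those types (field names are illustrative):

```scala
import org.apache.spark.sql.types._

// With the wildcard import, StructType, StructField, and StringType are all
// in scope — which is why the three per-type imports could be dropped.
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", StringType, nullable = true)
))
```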

0 commit comments