@@ -65,14 +65,14 @@ Throughout this document, we will often refer to Scala/Java Datasets of `Row`s as DataFrames.

The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:

-{% include_example init_session scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example init_session scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
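
For reference, the `init_session` snippet included above looks roughly like this (a minimal sketch; the app name and config key are illustrative):

{% highlight scala %}
import org.apache.spark.sql.SparkSession

// Build (or reuse) a session; getOrCreate() returns an existing session if one is running
val spark = SparkSession.builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// For implicit conversions like converting RDDs and Seqs to DataFrames
import spark.implicits._
{% endhighlight %}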
</div>

<div data-lang="java" markdown="1">

The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:

-{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -105,7 +105,7 @@ from a Hive table, or from [Spark data sources](#data-sources).

As an example, the following creates a DataFrame based on the content of a JSON file:

-{% include_example create_df scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example create_df scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
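
The `create_df` snippet is roughly the following (a sketch; the JSON path comes from the Spark examples tree):

{% highlight scala %}
// people.json is a newline-delimited JSON file shipped with the Spark examples
val df = spark.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
{% endhighlight %}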
</div>

<div data-lang="java" markdown="1">
@@ -114,7 +114,7 @@ from a Hive table, or from [Spark data sources](#data-sources).

As an example, the following creates a DataFrame based on the content of a JSON file:

-{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -155,7 +155,7 @@ Here we include some basic examples of structured data processing using Datasets

<div class="codetabs">
<div data-lang="scala" markdown="1">
-{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
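
The `untyped_ops` snippet demonstrates operations along these lines (a sketch; `df` is the DataFrame created from people.json above, and `$`-notation assumes `spark.implicits._` is imported):

{% highlight scala %}
// Print the schema in a tree format
df.printSchema()

// Select only the "name" column
df.select("name").show()

// Select everybody, but increment the age by 1
df.select($"name", $"age" + 1).show()

// Select people older than 21
df.filter($"age" > 21).show()

// Count people by age
df.groupBy("age").count().show()
{% endhighlight %}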

For a complete list of the types of operations that can be performed on a Dataset, refer to the [API Documentation](api/scala/index.html#org.apache.spark.sql.Dataset).

@@ -164,7 +164,7 @@ In addition to simple column references and expressions, Datasets also have a rich library of functions including string manipulation, date arithmetic, common math operations and more.

<div data-lang="java" markdown="1">

-{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}

For a complete list of the types of operations that can be performed on a Dataset, refer to the [API Documentation](api/java/org/apache/spark/sql/Dataset.html).

@@ -249,13 +249,13 @@ In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more.
<div data-lang="scala" markdown="1">
The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `DataFrame`.

-{% include_example run_sql scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example run_sql scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
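
The `run_sql` snippet is essentially the following (a sketch; it assumes the `df` DataFrame from the earlier JSON example):

{% highlight scala %}
// Register the DataFrame as a SQL temporary view
df.createOrReplaceTempView("people")

val sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()
{% endhighlight %}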
</div>

<div data-lang="java" markdown="1">
The `sql` function on a `SparkSession` enables applications to run SQL queries programmatically and returns the result as a `Dataset<Row>`.

-{% include_example run_sql java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example run_sql java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -287,11 +287,11 @@ the bytes back into an object.

<div class="codetabs">
<div data-lang="scala" markdown="1">
-{% include_example create_ds scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example create_ds scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
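
A sketch of what `create_ds` covers (the case class and path are illustrative; encoders come from `spark.implicits._`):

{% highlight scala %}
case class Person(name: String, age: Long)

// Encoders are created for case classes
val caseClassDS = Seq(Person("Andy", 32)).toDS()
caseClassDS.show()

// Encoders for most common types are provided automatically
val primitiveDS = Seq(1, 2, 3).toDS()
primitiveDS.map(_ + 1).collect() // returns Array(2, 3, 4)

// DataFrames can be converted to a Dataset by providing a class
val path = "examples/src/main/resources/people.json"
val peopleDS = spark.read.json(path).as[Person]
peopleDS.show()
{% endhighlight %}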
</div>

<div data-lang="java" markdown="1">
-{% include_example create_ds java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example create_ds java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
</div>
</div>

@@ -318,7 +318,7 @@ reflection and become the names of the columns. Case classes can also be nested or contain complex
types such as `Seq`s or `Array`s. This RDD can be implicitly converted to a DataFrame and then be
registered as a table. Tables can be used in subsequent SQL statements.

-{% include_example schema_inferring scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example schema_inferring scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
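
The reflection-based approach in `schema_inferring` looks roughly like this (a sketch; people.txt holds `name, age` lines in the Spark examples tree, and `toDF()` assumes `spark.implicits._` is imported as in the session setup above):

{% highlight scala %}
case class Person(name: String, age: Long)

// Create an RDD of Person objects from a text file and convert it to a DataFrame
val peopleDF = spark.sparkContext
  .textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
  .toDF()

// Register the DataFrame as a temporary view
peopleDF.createOrReplaceTempView("people")

// SQL statements can then be run over the view
val teenagersDF = spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19")
teenagersDF.show()
{% endhighlight %}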
</div>

<div data-lang="java" markdown="1">
@@ -330,7 +330,7 @@ does not support JavaBeans that contain `Map` field(s). Nested JavaBeans and `List` or `Array`
fields are supported though. You can create a JavaBean by creating a class that implements
Serializable and has getters and setters for all of its fields.

-{% include_example schema_inferring java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example schema_inferring java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -385,7 +385,7 @@ by `SparkSession`.

For example:

-{% include_example programmatic_schema scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example programmatic_schema scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
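
In outline, `programmatic_schema` builds the schema by hand (a sketch; the schema string and file path are illustrative):

{% highlight scala %}
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Create an RDD of Rows from the original text file
val rowRDD = spark.sparkContext
  .textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(attributes => Row(attributes(0), attributes(1).trim))

// The schema is encoded in a string; turn it into StructFields
val schemaString = "name age"
val fields = schemaString.split(" ")
  .map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)

// Apply the schema to the RDD of Rows
val peopleDF = spark.createDataFrame(rowRDD, schema)
peopleDF.show()
{% endhighlight %}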
</div>

<div data-lang="java" markdown="1">
@@ -403,7 +403,7 @@ by `SparkSession`.

For example:

-{% include_example programmatic_schema java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example programmatic_schema java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -472,11 +472,11 @@ In the simplest form, the default data source (`parquet` unless otherwise configured by `spark.sql.sources.default`) will be used for all operations.

<div class="codetabs">
<div data-lang="scala" markdown="1">
-{% include_example generic_load_save_functions scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example generic_load_save_functions scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
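
`generic_load_save_functions` boils down to the following (a sketch; users.parquet ships with the Spark examples and the output path is illustrative):

{% highlight scala %}
// load() and save() use the default data source (parquet) when no format is given
val usersDF = spark.read.load("examples/src/main/resources/users.parquet")
usersDF.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
{% endhighlight %}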
</div>

<div data-lang="java" markdown="1">
-{% include_example generic_load_save_functions java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example generic_load_save_functions java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -507,11 +507,11 @@ using this syntax.

<div class="codetabs">
<div data-lang="scala" markdown="1">
-{% include_example manual_load_options scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example manual_load_options scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
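
`manual_load_options` amounts to specifying the format explicitly (a sketch; paths are illustrative):

{% highlight scala %}
// Built-in sources have short names like "json" and "parquet"
val peopleDF = spark.read.format("json").load("examples/src/main/resources/people.json")
peopleDF.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
{% endhighlight %}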
</div>

<div data-lang="java" markdown="1">
-{% include_example manual_load_options java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example manual_load_options java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -538,11 +538,11 @@ file directly with SQL.

<div class="codetabs">
<div data-lang="scala" markdown="1">
-{% include_example direct_sql scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example direct_sql scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
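
`direct_sql` queries the file in place (a sketch; note the data source name prefix and the backticks around the path):

{% highlight scala %}
val sqlDF = spark.sql("SELECT * FROM parquet.`examples/src/main/resources/users.parquet`")
sqlDF.show()
{% endhighlight %}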
</div>

<div data-lang="java" markdown="1">
-{% include_example direct_sql java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example direct_sql java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -633,11 +633,11 @@ Using the data from the above example:

<div class="codetabs">

<div data-lang="scala" markdown="1">
-{% include_example basic_parquet_example scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example basic_parquet_example scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
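
`basic_parquet_example` does a write/read round trip (a sketch; `peopleDF` is the DataFrame from the earlier examples and the output path is illustrative):

{% highlight scala %}
// DataFrames can be saved as Parquet files, maintaining the schema information
peopleDF.write.parquet("people.parquet")

// Parquet files are self-describing, so the schema is preserved on read
val parquetFileDF = spark.read.parquet("people.parquet")

// Parquet files can also back a temporary view for SQL queries
parquetFileDF.createOrReplaceTempView("parquetFile")
val namesDF = spark.sql("SELECT name FROM parquetFile WHERE age BETWEEN 13 AND 19")
namesDF.show()
{% endhighlight %}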
</div>

<div data-lang="java" markdown="1">
-{% include_example basic_parquet_example java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example basic_parquet_example java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -766,11 +766,11 @@ turned it off by default starting from 1.5.0. You may enable it by

<div class="codetabs">

<div data-lang="scala" markdown="1">
-{% include_example schema_merging scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example schema_merging scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
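
`schema_merging` writes two partitions with different columns and reads them back merged (a sketch; the `data/test_table` paths are illustrative):

{% highlight scala %}
import spark.implicits._

// Create a simple DataFrame and store it in a partition directory
val squaresDF = spark.sparkContext.makeRDD(1 to 5).map(i => (i, i * i)).toDF("value", "square")
squaresDF.write.parquet("data/test_table/key=1")

// A second DataFrame in another partition directory adds a column and drops one
val cubesDF = spark.sparkContext.makeRDD(6 to 10).map(i => (i, i * i * i)).toDF("value", "cube")
cubesDF.write.parquet("data/test_table/key=2")

// Read the partitioned table with schema merging enabled
val mergedDF = spark.read.option("mergeSchema", "true").parquet("data/test_table")
mergedDF.printSchema()
{% endhighlight %}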
</div>

<div data-lang="java" markdown="1">
-{% include_example schema_merging java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example schema_merging java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -973,7 +973,7 @@ Note that the file that is offered as _a json file_ is not a typical JSON file. Each
line must contain a separate, self-contained valid JSON object. As a consequence,
a regular multi-line JSON file will most often fail.

-{% include_example json_dataset scala/org/apache/spark/examples/sql/SqlDataSourceExample.scala %}
+{% include_example json_dataset scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
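
`json_dataset` looks roughly like this (a sketch; the path comes from the Spark examples tree):

{% highlight scala %}
// Spark infers the schema of a JSON dataset and loads it as a Dataset[Row]
val peopleDF = spark.read.json("examples/src/main/resources/people.json")

// The inferred schema can be visualized using printSchema()
peopleDF.printSchema()

// SQL statements can be run over a temporary view
peopleDF.createOrReplaceTempView("people")
val teenagerNamesDF = spark.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19")
teenagerNamesDF.show()
{% endhighlight %}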
</div>

<div data-lang="java" markdown="1">
@@ -985,7 +985,7 @@ Note that the file that is offered as _a json file_ is not a typical JSON file. Each
line must contain a separate, self-contained valid JSON object. As a consequence,
a regular multi-line JSON file will most often fail.

-{% include_example json_dataset java/org/apache/spark/examples/sql/JavaSqlDataSourceExample.java %}
+{% include_example json_dataset java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python" markdown="1">
@@ -1879,9 +1879,8 @@ Spark SQL and DataFrames support the following data types:

All data types of Spark SQL are located in the package `org.apache.spark.sql.types`.
You can access them by doing
-{% highlight scala %}
-import org.apache.spark.sql.types._
-{% endhighlight %}
+
+{% include_example data_types scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
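
The `data_types` snippet is essentially the import shown in the removed block (a sketch; the `StructField` line is an illustrative use of the imported types):

{% highlight scala %}
import org.apache.spark.sql.types._

// e.g., declare a nullable integer field using the imported types
val field = StructField("age", IntegerType, nullable = true)
{% endhighlight %}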

<table class="table">
<tr>