239 changes: 0 additions & 239 deletions .github/workflows/master.yml

This file was deleted.

2 changes: 1 addition & 1 deletion R/pkg/DESCRIPTION
@@ -23,7 +23,7 @@ Suggests:
testthat,
e1071,
survival,
arrow (>= 0.15.1)
arrow (>= 1.0.0)
Collate:
'schema.R'
'generics.R'
4 changes: 4 additions & 0 deletions appveyor.yml
@@ -57,6 +57,10 @@ environment:
# "(converted from warning) unable to identify current timezone 'C':" for an unknown reason.
# This environment variable works around to test SparkR against a higher version.
R_REMOTES_NO_ERRORS_FROM_WARNINGS: true
# AppVeyor does not have python3 yet which is used by default.
PYSPARK_PYTHON: python
# TODO(SPARK-32453): Remove SPARK_SCALA_VERSION environment and let load-spark-env scripts detect it.
SPARK_SCALA_VERSION: 2.12

test_script:
- cmd: .\bin\spark-submit2.cmd --driver-java-options "-Dlog4j.configuration=file:///%CD:\=/%/R/log4j.properties" --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R
23 changes: 12 additions & 11 deletions bin/load-spark-env.cmd
@@ -22,7 +22,7 @@ rem spark-env.cmd is loaded from SPARK_CONF_DIR if set, or within the current di
rem conf\ subdirectory.

set SPARK_ENV_CMD=spark-env.cmd
if [%SPARK_ENV_LOADED%] == [] (
if not defined SPARK_ENV_LOADED (
set SPARK_ENV_LOADED=1

if [%SPARK_CONF_DIR%] == [] (
@@ -37,18 +37,19 @@ if [%SPARK_ENV_LOADED%] == [

rem Setting SPARK_SCALA_VERSION if not already set.

if [%SPARK_SCALA_VERSION%] == [] (
set SCALA_VERSION_1=2.13
set SCALA_VERSION_2=2.12
set SCALA_VERSION_1=2.13
set SCALA_VERSION_2=2.12

set ASSEMBLY_DIR1=%SPARK_HOME%\assembly\target\scala-%SCALA_VERSION_1%
set ASSEMBLY_DIR2=%SPARK_HOME%\assembly\target\scala-%SCALA_VERSION_2%
set ENV_VARIABLE_DOC=https://spark.apache.org/docs/latest/configuration.html#environment-variables
set ASSEMBLY_DIR1=%SPARK_HOME%\assembly\target\scala-%SCALA_VERSION_1%
set ASSEMBLY_DIR2=%SPARK_HOME%\assembly\target\scala-%SCALA_VERSION_2%
set ENV_VARIABLE_DOC=https://spark.apache.org/docs/latest/configuration.html#environment-variables

if not defined SPARK_SCALA_VERSION (
if exist %ASSEMBLY_DIR2% if exist %ASSEMBLY_DIR1% (
echo "Presence of build for multiple Scala versions detected (%ASSEMBLY_DIR1% and %ASSEMBLY_DIR2%)."
echo "Remove one of them or, set SPARK_SCALA_VERSION=%SCALA_VERSION_1% in %SPARK_ENV_CMD%."
echo "Visit %ENV_VARIABLE_DOC% for more details about setting environment variables in spark-env.cmd."
echo "Either clean one of them or, set SPARK_SCALA_VERSION in spark-env.cmd."
echo Presence of build for multiple Scala versions detected ^(%ASSEMBLY_DIR1% and %ASSEMBLY_DIR2%^).
echo Remove one of them or, set SPARK_SCALA_VERSION=%SCALA_VERSION_1% in spark-env.cmd.
echo Visit %ENV_VARIABLE_DOC% for more details about setting environment variables in spark-env.cmd.
echo Either clean one of them or, set SPARK_SCALA_VERSION in spark-env.cmd.
exit 1
)
if exist %ASSEMBLY_DIR1% (
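The rewritten echo lines above drop the surrounding double quotes, which cmd would otherwise print literally, and escape the parentheses with ^ so they do not close the enclosing if block; likewise, `if not defined` replaces the `[%VAR%] == []` pattern because it is evaluated at execution time rather than when the surrounding block is parsed. A minimal, standalone batch sketch of those two behaviors (the message text and fallback value are illustrative, not from this change):

```bat
@echo off
rem Minimal sketch (not from the PR): the cmd behaviors this hunk relies on.

rem "if not defined" is evaluated at execution time, while %VAR% inside a
rem parenthesized block is expanded when the block is parsed, so the
rem "defined" test is the more robust emptiness check.
if not defined SPARK_SCALA_VERSION (
  rem Unquoted echo avoids printing literal double quotes, but parentheses
  rem inside a block must be escaped with ^ so they do not end the block.
  echo No SPARK_SCALA_VERSION set ^(falling back to 2.12^).
  set SPARK_SCALA_VERSION=2.12
)

echo SPARK_SCALA_VERSION=%SPARK_SCALA_VERSION%
```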
2 changes: 1 addition & 1 deletion docs/sparkr.md
@@ -674,7 +674,7 @@ Rscript -e 'install.packages("arrow", repos="https://cloud.r-project.org/")'
Please refer to [the official documentation of Apache Arrow](https://arrow.apache.org/docs/r/) for more details.

Note that you must ensure that the Arrow R package is installed and available on all cluster nodes.
The current supported minimum version is 0.15.1; however, this might change between the minor releases since Arrow optimization in SparkR is experimental.
The current supported minimum version is 1.0.0; however, this might change between the minor releases since Arrow optimization in SparkR is experimental.
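
With the minimum supported version raised to 1.0.0, it can be worth verifying what each node actually has installed before enabling the optimization. A small, hypothetical R check (not part of the documentation change) might look like:

```r
# Hypothetical helper (not from the PR): verify the Arrow R package meets
# the minimum version SparkR now expects, installing it if necessary.
if (!requireNamespace("arrow", quietly = TRUE) ||
    utils::packageVersion("arrow") < "1.0.0") {
  install.packages("arrow", repos = "https://cloud.r-project.org/")
}
```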

## Enabling for Conversion to/from R DataFrame, `dapply` and `gapply`
