Closed
Changes from 1 commit
58 commits
82e2f09
Fix part of undocumented/duplicated arguments warnings by CRAN-check
junyangq Aug 9, 2016
41d9dca
[SPARK-16950] [PYSPARK] fromOffsets parameter support in KafkaUtils.c…
Aug 9, 2016
44115e9
[SPARK-16956] Make ApplicationState.MAX_NUM_RETRY configurable
JoshRosen Aug 9, 2016
2d136db
[SPARK-16905] SQL DDL: MSCK REPAIR TABLE
Aug 9, 2016
901edbb
More fixes of the docs.
junyangq Aug 10, 2016
475ee38
Fixed typo
jupblb Aug 10, 2016
2285de7
[SPARK-16522][MESOS] Spark application throws exception on exit.
sun-rui Aug 10, 2016
20efb79
[SPARK-16324][SQL] regexp_extract should doc that it returns empty st…
srowen Aug 10, 2016
719ac5f
[SPARK-15899][SQL] Fix the construction of the file path with hadoop …
avulanov Aug 10, 2016
15637f7
Revert "[SPARK-15899][SQL] Fix the construction of the file path with…
srowen Aug 10, 2016
977fbbf
[SPARK-15639] [SPARK-16321] [SQL] Push down filter at RowGroups level…
viirya Aug 10, 2016
d3a30d2
[SPARK-16579][SPARKR] add install.spark function
junyangq Aug 10, 2016
1e40135
[SPARK-17010][MINOR][DOC] Wrong description in memory management docu…
WangTaoTheTonic Aug 11, 2016
8611bc2
[SPARK-16866][SQL] Infrastructure for file-based SQL end-to-end tests
petermaxlee Aug 10, 2016
51b1016
[SPARK-17008][SPARK-17009][SQL] Normalization and isolation in SQLQue…
petermaxlee Aug 11, 2016
ea8a198
[SPARK-17007][SQL] Move test data files into a test-data folder
petermaxlee Aug 11, 2016
4b434e7
[SPARK-17011][SQL] Support testing exceptions in SQLQueryTestSuite
petermaxlee Aug 11, 2016
0ed6236
Correct example value for spark.ssl.YYY.XXX settings
ash211 Aug 11, 2016
33a213f
[SPARK-15899][SQL] Fix the construction of the file path with hadoop …
avulanov Aug 11, 2016
b87ba8f
Fix remaining undocumented/duplicated warnings
junyangq Aug 11, 2016
6bf20cd
[SPARK-17015][SQL] group-by/order-by ordinal and arithmetic tests
petermaxlee Aug 11, 2016
bc683f0
[SPARK-17018][SQL] literals.sql for testing literal parsing
petermaxlee Aug 11, 2016
0fb0149
[SPARK-17022][YARN] Handle potential deadlock in driver handling mess…
WangTaoTheTonic Aug 11, 2016
d2c1d64
Keep to the convention where we have docs for generic and the function.
junyangq Aug 12, 2016
b4047fc
[SPARK-16975][SQL] Column-partition path starting '_' should be handl…
dongjoon-hyun Aug 12, 2016
bde94cd
[SPARK-17013][SQL] Parse negative numeric literals
petermaxlee Aug 12, 2016
38378f5
[SPARK-12370][DOCUMENTATION] Documentation should link to examples …
jagadeesanas2 Aug 13, 2016
a21ecc9
[SPARK-17023][BUILD] Upgrade to Kafka 0.10.0.1 release
lresende Aug 13, 2016
750f880
[SPARK-16966][SQL][CORE] App Name is a randomUUID even when "spark.ap…
srowen Aug 13, 2016
e02d0d0
[SPARK-17027][ML] Avoid integer overflow in PolynomialExpansion.getPo…
zero323 Aug 14, 2016
8f4cacd
[SPARK-16508][SPARKR] Split docs for arrange and orderBy methods
junyangq Aug 15, 2016
4503632
[SPARK-17065][SQL] Improve the error message when encountering an inc…
zsxwing Aug 15, 2016
e5771a1
Fix docs for window functions
junyangq Aug 16, 2016
2e2c787
[SPARK-16964][SQL] Remove private[hive] from sql.hive.execution package
hvanhovell Aug 16, 2016
237ae54
Revert "[SPARK-16964][SQL] Remove private[hive] from sql.hive.executi…
rxin Aug 16, 2016
1c56971
[SPARK-16964][SQL] Remove private[sql] and private[spark] from sql.ex…
hvanhovell Aug 16, 2016
022230c
[SPARK-16519][SPARKR] Handle SparkR RDD generics that create warnings…
felixcheung Aug 16, 2016
6cb3eab
[SPARK-17089][DOCS] Remove api doc link for mapReduceTriplets operator
phalodi Aug 16, 2016
3e0163b
[SPARK-17084][SQL] Rename ParserUtils.assert to validate
hvanhovell Aug 17, 2016
68a24d3
[MINOR][DOC] Fix the descriptions for `properties` argument in the do…
Aug 17, 2016
22c7660
[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grow…
kiszk Aug 17, 2016
394d598
[SPARK-17102][SQL] bypass UserDefinedGenerator for json format check
cloud-fan Aug 17, 2016
9406f82
[SPARK-17096][SQL][STREAMING] Improve exception string reported throu…
tdas Aug 17, 2016
585d1d9
[SPARK-17038][STREAMING] fix metrics retrieval source of 'lastReceive…
keypointt Aug 17, 2016
91aa532
[SPARK-16995][SQL] TreeNodeException when flat mapping RelationalGrou…
viirya Aug 18, 2016
5735b8b
[SPARK-16391][SQL] Support partial aggregation for reduceGroups
rxin Aug 18, 2016
ec5f157
[SPARK-17117][SQL] 1 / NULL should not fail analysis
petermaxlee Aug 18, 2016
0bc3753
Fix part of undocumented/duplicated arguments warnings by CRAN-check
junyangq Aug 9, 2016
6d5233e
More fixes of the docs.
junyangq Aug 10, 2016
0edfd7d
Fix remaining undocumented/duplicated warnings
junyangq Aug 11, 2016
e72a6aa
Keep to the convention where we have docs for generic and the function.
junyangq Aug 12, 2016
afa69ed
Fix docs for window functions
junyangq Aug 16, 2016
c9cfe43
some fixes of R doc
junyangq Aug 18, 2016
3aafaa7
Move param docs from generic function to method definition.
junyangq Aug 18, 2016
315a0dd
some fixes of R doc
junyangq Aug 18, 2016
aa3d233
Move param docs from generic function to method definition.
junyangq Aug 18, 2016
71170e9
Solve conflicts.
junyangq Aug 18, 2016
2682719
Revert "Fix docs for window functions"
junyangq Aug 18, 2016
More fixes of the docs.
junyangq committed Aug 10, 2016
commit 901edbb8a41231137796d823c8b6624460163b3a
9 changes: 5 additions & 4 deletions R/pkg/R/DataFrame.R
@@ -177,7 +177,7 @@ setMethod("isLocal",
#' Print the first numRows rows of a SparkDataFrame
#'
#' @param numRows the number of rows to print. Defaults to 20.
#' @param truncate whether truncate long strings. If true, strings more than 20 characters will be
#' @param truncate whether truncate long strings. If \code{TRUE}, strings more than 20 characters will be
#' truncated. However, if set greater than zero, truncates strings longer than `truncate`
#' characters and all cells will be aligned right.
#' @family SparkDataFrame functions
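For reference, the documented truncate behaviour looks like this in practice (a minimal sketch, assuming an active SparkR session; the data is made up):

df <- createDataFrame(data.frame(s = c("a string well over twenty characters long", "short")))
showDF(df, numRows = 20, truncate = TRUE)   # strings longer than 20 characters are cut to 20
showDF(df, truncate = FALSE)                # prints the full strings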
@@ -916,6 +916,7 @@

#' Returns the number of rows in a SparkDataFrame
#'
#' @param x a SparkDataFrame.
#' @family SparkDataFrame functions
#' @rdname nrow
#' @name count
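Both spellings return the same row count; a quick sketch with a built-in dataset:

df <- createDataFrame(faithful)
count(df)   # 272
nrow(df)    # same value, via the R-style alias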
@@ -2847,7 +2848,7 @@ setMethod("fillna",
#' @param x a SparkDataFrame.
#' @param row.names NULL or a character vector giving the row names for the data frame.
#' @param optional If `TRUE`, converting column names is optional.
#' @param ... additional arguments passed to the method.
#' @param ... additional arguments to pass to base::as.data.frame.
#' @return A data.frame.
#' @family SparkDataFrame functions
#' @aliases as.data.frame,SparkDataFrame-method
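A minimal sketch of the documented call; note that it collects the distributed data to the driver:

df <- createDataFrame(faithful)
local_df <- as.data.frame(df)   # a plain local data.frame
head(local_df)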
@@ -3050,7 +3051,7 @@ setMethod("drop",
#'
#' @name histogram
#' @param nbins the number of bins (optional). Default value is 10.
#' @param col the column (described by character or Column object) to build the histogram from.
#' @param col the column as Character string or a Column to build the histogram from.
#' @param df the SparkDataFrame containing the Column to build the histogram from.
#' @return a data.frame with the histogram statistics, i.e., counts and centroids.
#' @rdname histogram
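A sketch of both documented ways to pass the column (assumes an active session):

df <- createDataFrame(faithful)
h1 <- histogram(df, "waiting", nbins = 12)    # column named by a character string
h2 <- histogram(df, df$waiting, nbins = 12)   # column passed as a Column object
head(h1)   # a data.frame of counts and centroids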
@@ -3185,7 +3186,7 @@ setMethod("histogram",
#' @param x A SparkDataFrame
#' @param url JDBC database url of the form `jdbc:subprotocol:subname`
#' @param tableName The name of the table in the external database
#' @param ... additional argument(s) passed to the method
#' @param ... additional JDBC database connection propertie(s).
Member: the singular form is property and plural is properties, so the () doesn't really work in this case. let's just leave it as properties.

Contributor (author): Done. Thanks!
#' @param mode One of 'append', 'overwrite', 'error', 'ignore' save mode (it is 'error' by default)
#' @family SparkDataFrame functions
#' @rdname write.jdbc
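A hedged sketch of write.jdbc with connection properties supplied through the dots; the URL, table name, and credentials are placeholders:

write.jdbc(df, url = "jdbc:postgresql://localhost:5432/testdb",
           tableName = "people", mode = "overwrite",
           user = "user", password = "secret")   # extra named args become JDBC properties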
7 changes: 4 additions & 3 deletions R/pkg/R/SQLContext.R
@@ -260,12 +260,13 @@ createDataFrame <- function(x, ...) {
dispatchFunc("createDataFrame(data, schema = NULL)", x, ...)
}

#' @param samplingRatio Currently not used.
#' @rdname createDataFrame
#' @aliases createDataFrame
#' @export
#' @method as.DataFrame default
#' @note as.DataFrame since 1.6.0
as.DataFrame.default <- function(data, schema = NULL) {
as.DataFrame.default <- function(data, schema = NULL, samplingRatio = 1.0) {
createDataFrame(data, schema)
}
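Per the new doc, samplingRatio is accepted but currently ignored; a minimal sketch:

df1 <- as.DataFrame(faithful)
df2 <- as.DataFrame(faithful, samplingRatio = 0.5)   # same result; the ratio is not used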

@@ -729,7 +730,7 @@ dropTempView <- function(viewName) {
#' @param source The name of external data source
#' @param schema The data schema defined in structType
#' @param na.strings Default string value for NA when source is "csv"
#' @param ... additional argument(s) passed to the method.
#' @param ... additional external data source specific named propertie(s).
#' @return SparkDataFrame
#' @rdname read.df
#' @name read.df
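A hedged sketch of read.df; the path is a placeholder and the extra named arguments are options for the chosen source:

df <- read.df("people.csv", source = "csv",
              header = "true", na.strings = "NA")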
@@ -844,7 +845,7 @@ createExternalTable <- function(x, ...) {
#' clause expressions used to split the column `partitionColumn` evenly.
#' This defaults to SparkContext.defaultParallelism when unset.
#' @param predicates a list of conditions in the where clause; each one defines one partition
#' @param ... additional argument(s) passed to the method.
#' @param ... additional JDBC database connection named propertie(s).
#' @return SparkDataFrame
#' @rdname read.jdbc
#' @name read.jdbc
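A hedged sketch of a partitioned JDBC read; all connection details below are placeholders:

df <- read.jdbc("jdbc:postgresql://localhost:5432/testdb", "people",
                partitionColumn = "id", lowerBound = 0, upperBound = 10000,
                numPartitions = 4, user = "user", password = "secret")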
22 changes: 11 additions & 11 deletions R/pkg/R/functions.R
@@ -445,8 +445,8 @@ setMethod("cosh",
#'
#' Returns the number of items in a group. This is a column aggregate function.
#'
#' @rdname nrow
#' @name count
#' @rdname n
#' @name n
#' @family agg_funcs
#' @aliases count,Column-method
#' @export
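A sketch of n as a per-group aggregate (assumes an active session):

df <- createDataFrame(mtcars)
head(agg(groupBy(df, "cyl"), n(df$cyl)))   # one count per cyl group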
@@ -1272,9 +1272,9 @@ setMethod("round",
#' bround(2.5, 0) = 2, bround(3.5, 0) = 4.
#'
#' @param x Column to compute on.
#' @param scale round to `scale` digits to the right of the decimal point when `scale` > 0,
#' the nearest even number when `scale` = 0, and `scale` digits to the left
#' of the decimal point when `scale` <= 0.
#' @param scale round to \code{scale} digits to the right of the decimal point when \code{scale} > 0,
#' the nearest even number when \code{scale} = 0, and `scale` digits to the left
#' of the decimal point when \code{scale} < 0.
#' @rdname bround
#' @name bround
#' @family math_funcs
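A sketch of the half-even behaviour described above:

df <- createDataFrame(data.frame(x = c(2.5, 3.5)))
head(select(df, bround(df$x, 0)))   # yields 2 and 4: ties round to the nearest even number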
@@ -1557,7 +1557,7 @@ setMethod("stddev_samp",
#' Creates a new struct column that composes multiple input columns.
#'
#' @param x a column to compute on.
#' @param ... additional column(s) to be included.
#' @param ... optional column(s) to be included.
#'
#' @rdname struct
#' @name struct
@@ -2269,8 +2269,8 @@ setMethod("n_distinct", signature(x = "Column"),
countDistinct(x, ...)
})

#' @rdname nrow
#'
#' @param x a Column.
#' @rdname n
#' @name n
#' @aliases n,Column-method
#' @export
@@ -2649,7 +2649,7 @@ setMethod("expr", signature(x = "character"),
#'
#' @param format a character object of format strings.
#' @param x a Column object.
#' @param ... additional columns.
#' @param ... additional Column(s).
#' @family string_funcs
#' @rdname format_string
#' @name format_string
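A minimal sketch (the data is made up):

df <- createDataFrame(data.frame(a = "v1", b = "v2"))
head(select(df, format_string("%s-%s", df$a, df$b)))   # "v1-v2"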
@@ -3034,8 +3034,8 @@ setMethod("when", signature(condition = "Column", value = "ANY"),
#' Otherwise \code{no} is returned for unmatched conditions.
#'
#' @param test a Column expression that describes the condition.
#' @param yes return values for true elements of test.
#' @param no return values for false elements of test.
#' @param yes return values for \code{TRUE} elements of test.
#' @param no return values for \code{FALSE} elements of test.
#' @family normal_funcs
#' @rdname ifelse
#' @name ifelse
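A sketch of the Column-level ifelse, which mirrors base R's ifelse but builds a Column expression:

df <- createDataFrame(data.frame(x = c(-1, 0, 1)))
head(select(df, ifelse(df$x > 0, "pos", "non-pos")))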
7 changes: 3 additions & 4 deletions R/pkg/R/generics.R
@@ -51,7 +51,7 @@ setGeneric("collectPartition",
standardGeneric("collectPartition")
})

# @rdname nrow
# @rdname count
# @export
setGeneric("count", function(x) { standardGeneric("count") })

@@ -1059,8 +1059,7 @@ setGeneric("month", function(x) { standardGeneric("month") })
#' @export
setGeneric("months_between", function(y, x) { standardGeneric("months_between") })

#' @param x a SparkDataFrame or a Column object.
#' @rdname nrow
#' @rdname n
#' @export
setGeneric("n", function(x) { standardGeneric("n") })

@@ -1315,7 +1314,7 @@ setGeneric("spark.naiveBayes", function(data, formula, ...) { standardGeneric("s

#' @rdname spark.survreg
#' @export
setGeneric("spark.survreg", function(data, formula, ...) { standardGeneric("spark.survreg") })
setGeneric("spark.survreg", function(data, formula) { standardGeneric("spark.survreg") })

#' @rdname write.ml
#' @export
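After this change the generic takes exactly (data, formula); a hedged sketch using the survival package's ovarian data:

library(survival)   # provides the ovarian dataset
df <- suppressWarnings(createDataFrame(ovarian))   # '.' in column names becomes '_'
model <- spark.survreg(df, Surv(futime, fustat) ~ ecog_ps + rx)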
4 changes: 2 additions & 2 deletions R/pkg/R/group.R
@@ -59,8 +59,8 @@ setMethod("show", "GroupedData",
#' Count the number of rows for each group.
#' The resulting SparkDataFrame will also contain the grouping columns.
#'
#' @param x a GroupedData
#' @return a SparkDataFrame
#' @param x a GroupedData.
#' @return A SparkDataFrame.
#' @rdname count
#' @aliases count,GroupedData-method
#' @export
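A sketch of the grouped count (assumes an active session):

df <- createDataFrame(mtcars)
head(count(groupBy(df, "cyl")))   # one row per cyl value, plus its row count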
13 changes: 6 additions & 7 deletions R/pkg/R/mllib.R
@@ -354,8 +354,8 @@ setMethod("spark.kmeans", signature(data = "SparkDataFrame", formula = "formula"
#' Note: A saved-loaded model does not support this method.
#'
#' @param object a fitted k-means model.
#' @param method type of fitted results, `"centers"` for cluster centers
#' or `"classes"` for assigned classes.
#' @param method type of fitted results, \code{"centers"} for cluster centers
#' or \code{"classes"} for assigned classes.
#' @param ... additional argument(s) passed to the method.
Member: let's remove `...` in the function

Contributor (author): The same as above...
#' @return \code{fitted} returns a SparkDataFrame containing fitted values.
#' @rdname fitted
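A hedged sketch of fitted() on a k-means model; the data, formula, and k are illustrative:

df <- createDataFrame(data.frame(x = rnorm(100), y = rnorm(100)))
model <- spark.kmeans(df, ~ x + y, k = 3)
head(fitted(model, method = "centers"))   # or method = "classes" for cluster assignments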
@@ -428,8 +428,8 @@ setMethod("predict", signature(object = "KMeansModel"),
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param ... additional argument(s) passed to the method. Currently only \code{smoothing}.
Member: this is removed, right?

Contributor (author): This is actually for the generic function. Should I move these docs to the generic definition?
#' @param smoothing smoothing parameter.
#' @param ... additional parameter(s) passed to the method.
#' @return \code{spark.naiveBayes} returns a fitted naive Bayes model.
#' @rdname spark.naiveBayes
#' @aliases spark.naiveBayes,SparkDataFrame,formula-method
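A hedged sketch of the narrowed signature (data, formula, smoothing); infert is a base R dataset and the formula is illustrative:

df <- createDataFrame(infert)
model <- spark.naiveBayes(df, education ~ parity + induced, smoothing = 1.0)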
@@ -457,7 +457,7 @@ setMethod("predict", signature(object = "KMeansModel"),
#' }
#' @note spark.naiveBayes since 2.0.0
setMethod("spark.naiveBayes", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, smoothing = 1.0, ...) {
function(data, formula, smoothing = 1.0) {
formula <- paste(deparse(formula), collapse = "")
jobj <- callJStatic("org.apache.spark.ml.r.NaiveBayesWrapper", "fit",
formula, data@sdf, smoothing)
@@ -577,8 +577,7 @@ read.ml <- function(path) {
#' @param data a SparkDataFrame for training.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', ':', '+', and '-'.
#' Note that operator '.' is not supported currently
#' @param ... additional argument(s) passed to the method.
#' Note that operator '.' is not supported currently.
#' @return \code{spark.survreg} returns a fitted AFT survival regression model.
#' @rdname spark.survreg
#' @seealso survival: \url{https://cran.r-project.org/web/packages/survival/}
@@ -603,7 +602,7 @@ read.ml <- function(path) {
#' }
#' @note spark.survreg since 2.0.0
setMethod("spark.survreg", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, ...) {
function(data, formula) {
formula <- paste(deparse(formula), collapse = "")
jobj <- callJStatic("org.apache.spark.ml.r.AFTSurvivalRegressionWrapper",
"fit", formula, data@sdf)
16 changes: 8 additions & 8 deletions R/pkg/R/sparkR.R
@@ -320,15 +320,15 @@ sparkRHive.init <- function(jsc = NULL) {
#' For details on how to initialize and use SparkR, refer to SparkR programming guide at
#' \url{http://spark.apache.org/docs/latest/sparkr.html#starting-up-sparksession}.
#'
#' @param master The Spark master URL
#' @param appName Application name to register with cluster manager
#' @param sparkHome Spark Home directory
#' @param sparkConfig Named list of Spark configuration to set on worker nodes
#' @param sparkJars Character vector of jar files to pass to the worker nodes
#' @param sparkPackages Character vector of packages from spark-packages.org
#' @param enableHiveSupport Enable support for Hive, fallback if not built with Hive support; once
#' @param master the Spark master URL.
#' @param appName application name to register with cluster manager.
#' @param sparkHome Spark Home directory.
#' @param sparkConfig named list of Spark configuration to set on worker nodes.
#' @param sparkJars character vector of jar files to pass to the worker nodes.
#' @param sparkPackages character vector of packages from spark-packages.org
#' @param enableHiveSupport enable support for Hive, fallback if not built with Hive support; once
#' set, this cannot be turned off on an existing session
#' @param ... additional parameters passed to the method
#' @param ... named Spark properties passed to the method.
#' @export
#' @examples
#'\dontrun{
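A hedged sketch of session startup; per the new doc, extra named arguments in the dots are treated as Spark properties:

sparkR.session(master = "local[2]", appName = "docs-example",
               spark.executor.memory = "1g")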