Closed

Changes from 1 commit (42 commits)
461714d  SPARK-10807. Added as.data.frame as a synonym for collect().  (Sep 24, 2015)
e9e34b5  Removed operator %++%, which is a synonym for paste()  (Sep 24, 2015)
c65b682  Removed extra blank space.  (Sep 24, 2015)
cee871c  Removed extra spaces to comply with R style  (Sep 24, 2015)
0851163  Moved setGeneric declaration to generics.R.  (Sep 28, 2015)
7a8e62a  Added test cases for as.data.frame  (Sep 28, 2015)
de6d164  Merge remote-tracking branch 'origin/SPARK-10807' into SPARK-10807  (Sep 28, 2015)
a346cc6  Changed setMethod declaration to comply with standard  (Sep 28, 2015)
6c4dcbc  Removed changes to .gitignore  (Sep 30, 2015)
99e6304  Merge remote-tracking branch 'upstream/master'  (Oct 5, 2015)
30c5d26  coltypes  (Oct 5, 2015)
4a92d99  Merged  (Nov 6, 2015)
a68f97a  coltypes  (Oct 5, 2015)
360156c  coltypes  (Oct 5, 2015)
0c2da6c  Added more types. Scala types that can't be mapped to R will remain a…  (Oct 9, 2015)
909e4e3  Removed white space  (Oct 9, 2015)
3cd2079  Added more tests  (Oct 9, 2015)
a7723d9  Fixed typo  (Oct 9, 2015)
0a0b278  Fixed typo  (Oct 9, 2015)
7e89935  Moved coltypes to new file types.R and refactored schema.R  (Oct 19, 2015)
21c0799  Updated DESCRIPTION file to add types.R  (Oct 19, 2015)
fee5a2e  Updated DESCRIPTION file  (Oct 19, 2015)
e1056ab  Coding style for setGeneric definition  (Oct 20, 2015)
75f5ced  Coding style  (Oct 20, 2015)
908abf4  Coding style  (Oct 20, 2015)
37bdc46  Fixed data type mapping. Put data types in an environment for more ef…  (Nov 3, 2015)
3b5c2d5  Removed unnecessary cat  (Nov 3, 2015)
001884a  Removed white space  (Nov 3, 2015)
eaaf178  Removed blank space  (Nov 3, 2015)
9a9618e  Update DataFrame.R  (Nov 3, 2015)
25faa4e  Update types.R  (Nov 4, 2015)
57a47a4  Update types.R  (Nov 4, 2015)
e5ab466  Update DataFrame.R  (Nov 4, 2015)
772de99  Added tests for complex types  (Nov 4, 2015)
67b12a4  Update types.R  (Nov 4, 2015)
0bb39dc  Update types.R  (Nov 4, 2015)
8aa13ef  Update test_sparkSQL.R  (Nov 4, 2015)
9b36955  Removed for loop  (Nov 5, 2015)
95a8ece  Update DataFrame.R  (Nov 5, 2015)
462b1f1  Update DataFrame.R  (Nov 5, 2015)
cd033c0  Removed blank space  (Nov 5, 2015)
ba091fb  Merge tests and description files  (Nov 6, 2015)
coltypes
Oscar D. Lara Yejas committed Nov 6, 2015
commit a68f97a557df528b929595ef6487cd515a232d2f
4 changes: 3 additions & 1 deletion R/pkg/NAMESPACE
@@ -23,9 +23,11 @@ export("setJobGroup",
exportClasses("DataFrame")

exportMethods("arrange",
"attach",
"as.data.frame",
"attach",
"cache",
"collect",
"coltypes",
"columns",
"count",
"cov",
29 changes: 29 additions & 0 deletions R/pkg/R/DataFrame.R
@@ -2102,6 +2102,7 @@ setMethod("as.data.frame",
stop(paste("Unused argument(s): ", paste(list(...), collapse=", ")))
}
collect(x)
})

#' The specified DataFrame is attached to the R search path. This means that
@@ -2152,3 +2153,31 @@ setMethod("with",
newEnv <- assignNewEnv(data)
eval(substitute(expr), envir = newEnv, enclos = newEnv)
})

#' Returns the column types of a DataFrame.
#'
#' @name coltypes
#' @title Get column types of a DataFrame
#' @param x (DataFrame)
Contributor:

Could you update the style of the function description to be more consistent with the other existing ones?

Member:

I can change this when updating my PR #9218

#' @return value (character) A character vector with the column types of the given DataFrame
#' @rdname coltypes
setMethod("coltypes",
          signature(x = "DataFrame"),
          function(x) {
            # TODO: This may be moved to a global parameter
            # These are the supported data types and how they map to
            # R's data types
            DATA_TYPES <- c("string" = "character",
                            "double" = "numeric",
                            "int" = "integer",
                            "long" = "integer",
                            "boolean" = "logical")
Contributor:

You only handle primitive types here, but not complex types like Array, Struct, and Map.

It would be better if you could refactor the type-mapping code here together with the related code in SerDe.

Author:

@sun-rui For complex types (Array/Struct/Map), I can't think of any mapping to R types. Therefore, as agreed with @felixcheung and @shivaram, these will remain the same. For example:

Original column types: ["string", "boolean", "map..."]
Result of coltypes(): ["character", "logical", "map..."]

Contributor:

@olarayej I think the fallback mechanism here is good. But @sun-rui makes another good point: it would be good to have one unified place where we do the mapping from R types to Java types. Right now part of that is in serialize.R / deserialize.R.

Could you see if there is some refactoring we could do so this is not duplicated?

Author:

@sun-rui @shivaram
The notion of coltypes is actually spread across three files: schema.R, serialize.R, and deserialize.R.

In serialize.R, the writeType method (see below) turns the full data type into a one-character string. Then readTypedObject (see below) uses this one-character type to read accordingly. I suspect this is because complex types can be parameterized, like map(String, String)?

In my opinion, it would be better to use the full data type rather than just the first letter, which could be especially confusing since we support data types starting with the same letter (Date/Double, String/Struct). Having the full data type would also allow centralizing the data types in one place, though this would require some major changes.

We could have mapping arrays:

PRIMITIVE_TYPES <- c("string" = "character",
                     "long" = "integer",
                     "tinyint" = "integer",
                     "short" = "integer",
                     "integer" = "integer",
                     "byte" = "integer",
                     "double" = "numeric",
                     "float" = "numeric",
                     "decimal" = "numeric",
                     "boolean" = "logical")

COMPLEX_TYPES <- c("map", "array", "struct", ...)

DATA_TYPES <- c(PRIMITIVE_TYPES, COMPLEX_TYPES)

And then we'd need to modify deserialize.R, serialize.R, and schema.R to acknowledge these accordingly.

Thoughts?

writeType <- function(con, class) {
  type <- switch(class,
                 NULL = "n",
                 integer = "i",
                 character = "c",
                 logical = "b",
                 double = "d",
                 numeric = "d",
                 raw = "r",
                 array = "a",
                 list = "l",
                 struct = "s",
                 jobj = "j",
                 environment = "e",
                 Date = "D",
                 POSIXlt = "t",
                 POSIXct = "t",
                 stop(paste("Unsupported type for serialization", class)))
  writeBin(charToRaw(type), con)
}

readTypedObject <- function(con, type) {
  switch(type,
         "i" = readInt(con),
         "c" = readString(con),
         "b" = readBoolean(con),
         "d" = readDouble(con),
         "r" = readRaw(con),
         "D" = readDate(con),
         "t" = readTime(con),
         "a" = readArray(con),
         "l" = readList(con),
         "e" = readEnv(con),
         "s" = readStruct(con),
         "n" = NULL,
         "j" = getJobj(readString(con)),
         stop(paste("Unsupported type for deserialization", type)))
}

Contributor:

The single-character names are there to reduce the amount of data serialized when we transfer these data types to the JVM. They're not meant to be remembered by anybody, so I don't see them being a source of confusion. @sun-rui also added tests which ensure these mappings don't break.

However, I think having the list of primitive types, complex types, and the mapping in a common file (types.R?) sounds good to me.
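As a sketch of what such a common types.R might look like, both mappings could live side by side. The helper names and the (abbreviated) tables below are hypothetical, not the actual SparkR API:

```r
# Hypothetical sketch of a shared types.R: keep the full-name mapping used by
# coltypes() and the one-character serialization codes in one place, so that
# serialize.R, deserialize.R and schema.R all read from the same tables.
PRIMITIVE_TYPES <- c("string" = "character",
                     "double" = "numeric",
                     "integer" = "integer",
                     "boolean" = "logical")

SERDE_CODES <- c(character = "c", numeric = "d",
                 integer = "i", logical = "b")

# Full SparkSQL type name -> R type (NA when unknown)
rTypeOf <- function(sqlType) unname(PRIMITIVE_TYPES[sqlType])

# R type -> one-character wire code, failing loudly for unsupported types
serdeCodeOf <- function(rType) {
  code <- unname(SERDE_CODES[rType])
  if (any(is.na(code))) {
    stop(paste("Unsupported type for serialization", rType))
  }
  code
}
```

With this, `rTypeOf("boolean")` gives `"logical"` and `serdeCodeOf("numeric")` gives the wire code `"d"`, while an unknown SparkSQL type yields NA for the caller to handle.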


            # Get the data types of the DataFrame by invoking dtypes() function.
            # Some post-processing is needed.
            types <- as.character(t(as.data.frame(dtypes(x))[2, ]))

            # Map Spark data types into R's data types
            as.character(DATA_TYPES[types])
Member:

Could you check for the case when it doesn't match the known types?

Author:

@felixcheung Yeah, that's a good point. I'm thinking coltypes() should always have an equivalent R data type for each column. We don't want coltypes() to return NAs or throw an unsupported-type error, because that would mean the input DataFrame is inconsistent.

Therefore, it would just be a matter of putting into DATA_TYPES the list of all possible values returned by dtypes() (in case I'm missing any). I couldn't find that list in the docs. Could you point me to it?

Finally, I think the check for unsupported data types should instead be done in the coltypes()<- method and in the DataFrame initialization. coltypes() assumes the input DataFrame was assigned valid data types, which makes sense to me.

Author:

@felixcheung, @shivaram: Any thoughts on this one?

Contributor:

http://spark.apache.org/docs/latest/sql-programming-guide.html#data-types is a list that might be helpful.

Also, I think it might make sense to try to map them to R types and, if we fail to find a relevant one, fall back to the SparkSQL type.

Author:

@shivaram I agree. I could use the mapping below (I got the short types from schema.R:118):

Scala -> R
"string" = "character",
"long" = "integer",
"short" = "integer",
"integer" = "integer",
"byte" = "integer",
"double" = "numeric",
"float" = "numeric",
"decimal" = "numeric",
"boolean" = "logical"

In any other case, I will use the same Scala type. Sounds good?

Contributor:

Yep. This sounds good.

})
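A minimal sketch of the fallback agreed on in this thread (the helper name toRTypes is hypothetical): known SparkSQL primitives map to R types, and anything else keeps its SparkSQL name rather than becoming NA.

```r
# Hypothetical sketch of the agreed fallback: known SparkSQL primitive types
# map to R types; complex types (array, map, struct, ...) pass through
# unchanged instead of turning into NA.
PRIMITIVE_TYPES <- c("string" = "character",
                     "long" = "integer",
                     "short" = "integer",
                     "integer" = "integer",
                     "byte" = "integer",
                     "double" = "numeric",
                     "float" = "numeric",
                     "decimal" = "numeric",
                     "boolean" = "logical")

toRTypes <- function(types) {
  rTypes <- unname(PRIMITIVE_TYPES[types])
  # Fall back to the original SparkSQL type when no R equivalent exists
  ifelse(is.na(rTypes), types, rTypes)
}

toRTypes(c("double", "boolean", "map<string,string>"))
# "numeric" "logical" "map<string,string>"
```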
5 changes: 4 additions & 1 deletion R/pkg/R/generics.R
@@ -1027,7 +1026,6 @@ setGeneric("weekofyear", function(x) { standardGeneric("weekofyear") })
#' @export
setGeneric("year", function(x) { standardGeneric("year") })


#' @rdname glm
#' @export
setGeneric("glm")
@@ -1047,3 +1046,7 @@ setGeneric("attach")
#' @rdname with
#' @export
setGeneric("with")

#' @rdname coltypes
#' @export
setGeneric("coltypes", function(x) standardGeneric("coltypes"))
Contributor:

style: { standardGeneric("coltypes") }

10 changes: 8 additions & 2 deletions R/pkg/inst/tests/test_sparkSQL.R
@@ -1460,13 +1460,15 @@ test_that("SQL error message is returned from JVM", {
expect_equal(grepl("Table not found: blah", retError), TRUE)
})

irisDF <- createDataFrame(sqlContext, iris)

test_that("Method as.data.frame as a synonym for collect()", {
irisDF <- createDataFrame(sqlContext, iris)
expect_equal(as.data.frame(irisDF), collect(irisDF))
irisDF2 <- irisDF[irisDF$Species == "setosa", ]
expect_equal(as.data.frame(irisDF2), collect(irisDF2))
})

test_that("attach() on a DataFrame", {
df <- jsonFile(sqlContext, jsonPath)
expect_error(age)
@@ -1496,6 +1498,10 @@ test_that("with() on a DataFrame", {
expect_equal(nrow(sum2), 35)
})

test_that("Method coltypes() to get R's data types of a DataFrame", {
expect_equal(coltypes(irisDF), c(rep("numeric", 4), "character"))
})
Contributor:

Could you add a test with some other types? Another one which runs into the NA case and uses the SQL type would also be useful.


unlink(parquetPath)
unlink(jsonPath)
unlink(jsonPathNa)