Merged
Changes from 1 commit
75 commits
0092abb
Some minor cleanup after SPARK-4550.
sryza May 6, 2015
1fd31ba
[SPARK-6231][SQL/DF] Automatically resolve join condition ambiguity f…
rxin May 6, 2015
51b3d41
Revert "[SPARK-3454] separate json endpoints for data in the UI"
rxin May 6, 2015
a466944
[SPARK-6841] [SPARKR] add support for mean, median, stdev etc.
hqzizania May 6, 2015
ba2b566
[SPARK-7358][SQL] Move DataFrame mathfunctions into functions
brkyvz May 6, 2015
7b14578
[SPARK-6267] [MLLIB] Python API for IsotonicRegression
yanboliang May 6, 2015
9f019c7
[SPARK-7384][Core][Tests] Fix flaky tests for distributed mode in Bro…
zsxwing May 6, 2015
32cdc81
[SPARK-6940] [MLLIB] Add CrossValidator to Python ML pipeline API
mengxr May 6, 2015
322e7e7
[SQL] JavaDoc update for various DataFrame functions.
rxin May 6, 2015
150f671
[SPARK-5456] [SQL] fix decimal compare for jdbc rdd
adrian-wang May 6, 2015
c3eb441
[SPARK-6201] [SQL] promote string and do widen types for IN
adrian-wang May 6, 2015
f2c4708
[SPARK-1442] [SQL] Window Function Support for Spark SQL
yhuai May 6, 2015
002c123
[SPARK-7311] Introduce internal Serializer API for determining if ser…
JoshRosen May 6, 2015
845d1d4
Add `Private` annotation.
JoshRosen May 6, 2015
7740996
[HOT-FIX] Move HiveWindowFunctionQuerySuite.scala to hive compatibili…
yhuai May 6, 2015
1ad04da
[SPARK-5995] [ML] Make Prediction dev API public
jkbradley May 6, 2015
fbf1f34
[HOT FIX] [SPARK-7418] Ignore flaky SparkSubmitUtilsSuite test
May 7, 2015
4e93042
[SPARK-6799] [SPARKR] Remove SparkR RDD examples, add dataframe examples
shivaram May 7, 2015
316a5c0
[SPARK-7396] [STREAMING] [EXAMPLE] Update KafkaWordCountProducer to u…
jerryshao May 7, 2015
8fa6829
[SPARK-7371] [SPARK-7377] [SPARK-7408] DAG visualization addendum (#5…
May 7, 2015
71a452b
[HOT FIX] For DAG visualization #5954
May 7, 2015
14502d5
[SPARK-7405] [STREAMING] Fix the bug that ReceiverInputDStream doesn'…
zsxwing May 7, 2015
773aa25
[SPARK-7432] [MLLIB] disable cv doctest
mengxr May 7, 2015
9cfa9a5
[SPARK-6812] [SPARKR] filter() on DataFrame does not work as expected.
May 7, 2015
2d6612c
[SPARK-5938] [SPARK-5443] [SQL] Improve JsonRDD performance
May 7, 2015
cfdadcb
[SPARK-7430] [STREAMING] [TEST] General improvements to streaming tes…
tdas May 7, 2015
01187f5
[SPARK-7217] [STREAMING] Add configuration to control the default beh…
tdas May 7, 2015
fa8fddf
[SPARK-7295][SQL] bitwise operations for DataFrame DSL
Shiti May 7, 2015
fae4e2d
[SPARK-7035] Encourage __getitem__ over __getattr__ on column access …
ksonj May 7, 2015
8b6b46e
[SPARK-7421] [MLLIB] OnlineLDA cleanups
jkbradley May 7, 2015
4f87e95
[SPARK-7429] [ML] Params cleanups
jkbradley May 7, 2015
ed9be06
[SPARK-7330] [SQL] avoid NPE at jdbc rdd
adrian-wang May 7, 2015
9e2ffb1
[SPARK-7388] [SPARK-7383] wrapper for VectorAssembler in Python
brkyvz May 7, 2015
068c315
[SPARK-7118] [Python] Add the coalesce Spark SQL function available i…
May 7, 2015
1712a7c
[SPARK-6093] [MLLIB] Add RegressionMetrics in PySpark/MLlib
yanboliang May 7, 2015
5784c8d
[SPARK-1442] [SQL] [FOLLOW-UP] Address minor comments in Window Funct…
yhuai May 7, 2015
dec8f53
[SPARK-7116] [SQL] [PYSPARK] Remove cache() causing memory leak
ksonj May 7, 2015
074d75d
[SPARK-5213] [SQL] Remove the duplicated SparkSQLParser
chenghao-intel May 7, 2015
0c33bf8
[SPARK-7399] [SPARK CORE] Fixed compilation error in scala 2.11
May 7, 2015
4eecf55
[SPARK-7373] [MESOS] Add docker support for launching drivers in meso…
tnachen May 7, 2015
f121651
[SPARK-7391] DAG visualization: auto expand if linked from another viz
May 7, 2015
88717ee
[SPARK-7347] DAG visualization: add tooltips to RDDs
May 7, 2015
347a329
[SPARK-7328] [MLLIB] [PYSPARK] Pyspark.mllib.linalg.Vectors: Missing …
MechCoder May 7, 2015
658a478
[SPARK-5726] [MLLIB] Elementwise (Hadamard) Vector Product Transformer
ogeagla May 7, 2015
e43803b
[SPARK-6948] [MLLIB] compress vectors in VectorAssembler
mengxr May 7, 2015
97d1182
[SQL] [MINOR] make star and multialias extend NamedExpression
scwf May 7, 2015
ea3077f
[SPARK-7277] [SQL] Throw exception if the property mapred.reduce.task…
viirya May 7, 2015
937ba79
[SPARK-5281] [SQL] Registering table on RDD is giving MissingRequirem…
dragos May 7, 2015
35f0173
[SPARK-2155] [SQL] [WHEN D THEN E] [ELSE F] add CaseKeyWhen for "CASE…
cloud-fan May 7, 2015
88063c6
[SPARK-7450] Use UNSAFE.getLong() to speed up BitSetMethods#anySet()
tedyu May 7, 2015
22ab70e
[SPARK-7305] [STREAMING] [WEBUI] Make BatchPage show friendly informa…
zsxwing May 8, 2015
cd1d411
[SPARK-6908] [SQL] Use isolated Hive client
marmbrus May 8, 2015
92f8f80
[SPARK-7452] [MLLIB] fix bug in topBykey and update test
coderxiang May 8, 2015
3af423c
[SPARK-6986] [SQL] Use Serializer2 in more cases.
yhuai May 8, 2015
714db2e
[SPARK-7470] [SQL] Spark shell SQLContext crashes without hive
May 8, 2015
f496bf3
[SPARK-7232] [SQL] Add a Substitution batch for spark sql analyzer
scwf May 8, 2015
c2f0821
[SPARK-7392] [CORE] bugfix: Kryo buffer size cannot be larger than 2M
liyezhang556520 May 8, 2015
ebff732
[SPARK-6869] [PYSPARK] Add pyspark archives path to PYTHONPATH
lianhuiwang May 8, 2015
c796be7
[SPARK-3454] separate json endpoints for data in the UI
squito May 8, 2015
f5ff4a8
[SPARK-7383] [ML] Feature Parity in PySpark for ml.features
brkyvz May 8, 2015
65afd3c
[SPARK-7474] [MLLIB] update ParamGridBuilder doctest
mengxr May 8, 2015
008a60d
[SPARK-6824] Fill the docs for DataFrame API in SparkR
hqzizania May 8, 2015
35d6a99
[SPARK-7436] Fixed instantiation of custom recovery mode factory and …
jacek-lewandowski May 8, 2015
a1ec08f
[SPARK-7298] Harmonize style of new visualizations
mateiz May 8, 2015
2d05f32
[SPARK-7133] [SQL] Implement struct, array, and map field accessor
cloud-fan May 8, 2015
4b3bb0e
[SPARK-6627] Finished rename to ShuffleBlockResolver
kayousterhout May 8, 2015
25889d8
[SPARK-7490] [CORE] [Minor] MapOutputTracker.deserializeMapStatuses: …
May 8, 2015
dc71e47
[MINOR] Ignore python/lib/pyspark.zip
zsxwing May 8, 2015
c45c09b
[WEBUI] Remove debug feature for vis.js
sarutak May 8, 2015
4e7360e
[SPARK-7489] [SPARK SHELL] Spark shell crashes when compiled with sca…
vinodkc May 8, 2015
31da40d
[MINOR] Defeat early garbage collection of test suite variable
tellison May 8, 2015
3b0c5e7
[SPARK-7466] DAG visualization: fix orphan nodes
May 8, 2015
9042f8f
[MINOR] [CORE] Allow History Server to read kerberos opts from config…
May 8, 2015
5467c34
[SPARK-7378] [CORE] Handle deep links to unloaded apps.
May 8, 2015
90527f5
[SPARK-7390] [SQL] Only merge other CovarianceCounter when its count …
viirya May 8, 2015
[SPARK-2155] [SQL] [WHEN D THEN E] [ELSE F] add CaseKeyWhen for "CASE a WHEN b THEN c * END"

Avoid translating to CaseWhen, which would evaluate the key expression many times.

Author: Wenchen Fan <[email protected]>

Closes apache#5979 from cloud-fan/condition and squashes the following commits:

3ce54e1 [Wenchen Fan] add CaseKeyWhen
cloud-fan authored and marmbrus committed May 7, 2015
commit 35f0173b8f67e2e506fc4575be6430cfb66e2238
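
To make the commit message concrete: before this patch the parser rewrote "CASE a WHEN b THEN c ... END" into "CASE WHEN a=b THEN c ... END", so the key expression a was re-evaluated once per tested WHEN branch; CaseKeyWhen evaluates it exactly once. Below is a standalone Scala sketch (not part of the patch; key() is a hypothetical stand-in for an expensive or effectful key expression) that counts key evaluations under both strategies:

object CaseKeyWhenMotivation {
  var keyEvals = 0
  // Stand-in for an expensive / non-deterministic key expression.
  def key(): Int = { keyEvals += 1; 42 }

  // Old strategy: "CASE k WHEN w THEN t ..." rewritten to "CASE WHEN k=w THEN t ...",
  // so the key runs once per WHEN branch that gets tested.
  def oldCaseWhen(branches: Seq[(Int, String)]): Option[String] =
    branches.collectFirst { case (w, t) if key() == w => t }

  // New strategy: evaluate the key a single time, then compare it to each WHEN operand.
  def newCaseKeyWhen(branches: Seq[(Int, String)]): Option[String] = {
    val k = key()
    branches.collectFirst { case (w, t) if k == w => t }
  }

  def main(args: Array[String]): Unit = {
    val branches = Seq(1 -> "one", 2 -> "two", 42 -> "answer")
    keyEvals = 0; oldCaseWhen(branches); println(s"old rewrite: $keyEvals key evaluations")    // 3
    keyEvals = 0; newCaseKeyWhen(branches); println(s"CaseKeyWhen: $keyEvals key evaluations") // 1
  }
}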
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala
@@ -296,13 +296,13 @@ class SqlParser extends AbstractSparkSQLParser with DataTypeParser {
     | LOWER ~ "(" ~> expression <~ ")" ^^ { case exp => Lower(exp) }
     | IF ~ "(" ~> expression ~ ("," ~> expression) ~ ("," ~> expression) <~ ")" ^^
       { case c ~ t ~ f => If(c, t, f) }
-    | CASE ~> expression.? ~ (WHEN ~> expression ~ (THEN ~> expression)).* ~
+    | CASE ~> expression.? ~ rep1(WHEN ~> expression ~ (THEN ~> expression)) ~
       (ELSE ~> expression).? <~ END ^^ {
         case casePart ~ altPart ~ elsePart =>
-          val altExprs = altPart.flatMap { case whenExpr ~ thenExpr =>
-            Seq(casePart.fold(whenExpr)(EqualTo(_, whenExpr)), thenExpr)
-          }
-          CaseWhen(altExprs ++ elsePart.toList)
+          val branches = altPart.flatMap { case whenExpr ~ thenExpr =>
+            Seq(whenExpr, thenExpr)
+          } ++ elsePart
+          casePart.map(CaseKeyWhen(_, branches)).getOrElse(CaseWhen(branches))
       }
     | (SUBSTR | SUBSTRING) ~ "(" ~> expression ~ ("," ~> expression) <~ ")" ^^
       { case s ~ p => Substring(s, p, Literal(Integer.MAX_VALUE)) }
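
One behavioral consequence of the parser change above: rep1 requires at least one WHEN clause, where the old postfix .* accepted zero. A minimal sketch (plain scala-parser-combinators, not Spark code; the tiny grammar is invented for illustration):

import scala.util.parsing.combinator.RegexParsers

object Rep1VsStar extends RegexParsers {
  val when: Parser[String] = "WHEN" ~> """\w+""".r
  val zeroOrMore: Parser[List[String]] = "CASE" ~> when.* <~ "END"     // old shape
  val oneOrMore: Parser[List[String]] = "CASE" ~> rep1(when) <~ "END"  // new shape

  def main(args: Array[String]): Unit = {
    println(parseAll(zeroOrMore, "CASE END"))       // succeeds with List()
    println(parseAll(oneOrMore, "CASE END"))        // fails: a WHEN branch is mandatory
    println(parseAll(oneOrMore, "CASE WHEN a END")) // succeeds with List(a)
  }
}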
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala
@@ -631,31 +631,24 @@ trait HiveTypeCoercion {
     import HiveTypeCoercion._
 
     def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
-      case cw @ CaseWhen(branches) if !cw.resolved && !branches.exists(!_.resolved) =>
-        val valueTypes = branches.sliding(2, 2).map {
-          case Seq(_, value) => value.dataType
-          case Seq(elseVal) => elseVal.dataType
-        }.toSeq
-
-        logDebug(s"Input values for null casting ${valueTypes.mkString(",")}")
-
-        if (valueTypes.distinct.size > 1) {
-          val commonType = valueTypes.reduce { (v1, v2) =>
-            findTightestCommonType(v1, v2)
-              .getOrElse(sys.error(
-                s"Types in CASE WHEN must be the same or coercible to a common type: $v1 != $v2"))
-          }
-          val transformedBranches = branches.sliding(2, 2).map {
-            case Seq(cond, value) if value.dataType != commonType =>
-              Seq(cond, Cast(value, commonType))
-            case Seq(elseVal) if elseVal.dataType != commonType =>
-              Seq(Cast(elseVal, commonType))
-            case s => s
-          }.reduce(_ ++ _)
-          CaseWhen(transformedBranches)
-        } else {
-          // Types match up. Hopefully some other rule fixes whatever is wrong with resolution.
-          cw
-        }
+      case cw: CaseWhenLike if !cw.resolved && cw.childrenResolved && !cw.valueTypesEqual =>
+        logDebug(s"Input values for null casting ${cw.valueTypes.mkString(",")}")
+        val commonType = cw.valueTypes.reduce { (v1, v2) =>
+          findTightestCommonType(v1, v2).getOrElse(sys.error(
+            s"Types in CASE WHEN must be the same or coercible to a common type: $v1 != $v2"))
+        }
+        val transformedBranches = cw.branches.sliding(2, 2).map {
+          case Seq(when, value) if value.dataType != commonType =>
+            Seq(when, Cast(value, commonType))
+          case Seq(elseVal) if elseVal.dataType != commonType =>
+            Seq(Cast(elseVal, commonType))
+          case s => s
+        }.reduce(_ ++ _)
+        cw match {
+          case _: CaseWhen =>
+            CaseWhen(transformedBranches)
+          case CaseKeyWhen(key, _) =>
+            CaseKeyWhen(key, transformedBranches)
+        }
     }
   }
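
The rewritten coercion rule now covers both CaseWhen and CaseKeyWhen with the same recipe: compute one common type over all THEN/ELSE values and cast any value that differs. A toy model of that recipe (not Spark code; the two-type lattice is a stand-in for findTightestCommonType):

object CaseWhenCoercionSketch {
  sealed trait DType
  case object IntT extends DType
  case object DoubleT extends DType

  // Stand-in for findTightestCommonType on a two-type lattice: Int widens to Double.
  def tightestCommon(a: DType, b: DType): DType = if (a == b) a else DoubleT

  // `values` are the THEN/ELSE results paired with their current types.
  def coerce(values: Seq[(String, DType)]): Seq[(String, DType)] = {
    val common = values.map(_._2).reduce(tightestCommon)
    values.map {
      case (v, t) if t != common => (s"CAST($v AS $common)", common)
      case ok => ok
    }
  }

  def main(args: Array[String]): Unit =
    // e.g. CASE WHEN p THEN 1 ELSE 2.5 END: the Int branch is cast to Double.
    println(coerce(Seq("1" -> IntT, "2.5" -> DoubleT)))
}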
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Expression.scala
@@ -64,7 +64,7 @@ abstract class Expression extends TreeNode[Expression] {
    * Returns true if all the children of this expression have been resolved to a specific schema
    * and false if any still contains any unresolved placeholders.
    */
-  def childrenResolved: Boolean = !children.exists(!_.resolved)
+  def childrenResolved: Boolean = children.forall(_.resolved)
 
   /**
    * Returns a string representation of this expression that does not have developer centric
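
The childrenResolved rewrite is purely cosmetic: !xs.exists(!p) and xs.forall(p) are equivalent by De Morgan's laws, as a quick REPL check confirms:

for (flags <- List(List(true, true), List(true, false), Nil))
  assert(!flags.exists(!_) == flags.forall(identity))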
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
@@ -353,79 +353,134 @@ case class If(predicate: Expression, trueValue: Expression, falseValue: Expression)
   override def toString: String = s"if ($predicate) $trueValue else $falseValue"
 }
 
+trait CaseWhenLike extends Expression {
+  self: Product =>
+
+  type EvaluatedType = Any
+
+  // Note that `branches` are considered in consecutive pairs (cond, val), and the optional last
+  // element is the value for the default catch-all case (if provided).
+  // Hence, `branches` consists of at least two elements, and can have an odd or even length.
+  def branches: Seq[Expression]
+
+  @transient lazy val whenList =
+    branches.sliding(2, 2).collect { case Seq(whenExpr, _) => whenExpr }.toSeq
+  @transient lazy val thenList =
+    branches.sliding(2, 2).collect { case Seq(_, thenExpr) => thenExpr }.toSeq
+  val elseValue = if (branches.length % 2 == 0) None else Option(branches.last)
+
+  // both then and else val should be considered.
+  def valueTypes: Seq[DataType] = (thenList ++ elseValue).map(_.dataType)
+  def valueTypesEqual: Boolean = valueTypes.distinct.size <= 1
+
+  override def dataType: DataType = {
+    if (!resolved) {
+      throw new UnresolvedException(this, "cannot resolve due to differing types in some branches")
+    }
+    valueTypes.head
+  }
+
+  override def nullable: Boolean = {
+    // If no value is nullable and no elseValue is provided, the whole statement defaults to null.
+    thenList.exists(_.nullable) || (elseValue.map(_.nullable).getOrElse(true))
+  }
+}
+
 // scalastyle:off
 /**
  * Case statements of the form "CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END".
  * Refer to this link for the corresponding semantics:
  * https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ConditionalFunctions
- *
- * The other form of case statements "CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END" gets
- * translated to this form at parsing time. Namely, such a statement gets translated to
- * "CASE WHEN a=b THEN c [WHEN a=d THEN e]* [ELSE f] END".
- *
- * Note that `branches` are considered in consecutive pairs (cond, val), and the optional last
- * element is the value for the default catch-all case (if provided). Hence, `branches` consists of
- * at least two elements, and can have an odd or even length.
  */
 // scalastyle:on
-case class CaseWhen(branches: Seq[Expression]) extends Expression {
-  type EvaluatedType = Any
+case class CaseWhen(branches: Seq[Expression]) extends CaseWhenLike {
 
   // Use private[this] Array to speed up evaluation.
   @transient private[this] lazy val branchesArr = branches.toArray
 
   override def children: Seq[Expression] = branches
 
-  override def dataType: DataType = {
-    if (!resolved) {
-      throw new UnresolvedException(this, "cannot resolve due to differing types in some branches")
+  override lazy val resolved: Boolean =
+    childrenResolved &&
+    whenList.forall(_.dataType == BooleanType) &&
+    valueTypesEqual
+
+  /** Written in imperative fashion for performance considerations. */
+  override def eval(input: Row): Any = {
+    val len = branchesArr.length
+    var i = 0
+    // If all branches fail and an elseVal is not provided, the whole statement
+    // defaults to null, according to Hive's semantics.
+    while (i < len - 1) {
+      if (branchesArr(i).eval(input) == true) {
+        return branchesArr(i + 1).eval(input)
+      }
+      i += 2
+    }
+    var res: Any = null
+    if (i == len - 1) {
+      res = branchesArr(i).eval(input)
     }
-    branches(1).dataType
+    return res
   }
+
+  override def toString: String = {
+    "CASE" + branches.sliding(2, 2).map {
+      case Seq(cond, value) => s" WHEN $cond THEN $value"
+      case Seq(elseValue) => s" ELSE $elseValue"
+    }.mkString
+  }
+}
+
+// scalastyle:off
+/**
+ * Case statements of the form "CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END".
+ * Refer to this link for the corresponding semantics:
+ * https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ConditionalFunctions
+ */
+// scalastyle:on
+case class CaseKeyWhen(key: Expression, branches: Seq[Expression]) extends CaseWhenLike {
+
+  // Use private[this] Array to speed up evaluation.
+  @transient private[this] lazy val branchesArr = branches.toArray
-  @transient private[this] lazy val predicates =
-    branches.sliding(2, 2).collect { case Seq(cond, _) => cond }.toSeq
-  @transient private[this] lazy val values =
-    branches.sliding(2, 2).collect { case Seq(_, value) => value }.toSeq
-  @transient private[this] lazy val elseValue =
-    if (branches.length % 2 == 0) None else Option(branches.last)
 
-  override def nullable: Boolean = {
-    // If no value is nullable and no elseValue is provided, the whole statement defaults to null.
-    values.exists(_.nullable) || (elseValue.map(_.nullable).getOrElse(true))
-  }
+  override def children: Seq[Expression] = key +: branches
 
-  override lazy val resolved: Boolean = {
-    if (!childrenResolved) {
-      false
-    } else {
-      val allCondBooleans = predicates.forall(_.dataType == BooleanType)
-      // both then and else val should be considered.
-      val dataTypesEqual = (values ++ elseValue).map(_.dataType).distinct.size <= 1
-      allCondBooleans && dataTypesEqual
-    }
-  }
+  override lazy val resolved: Boolean =
+    childrenResolved && valueTypesEqual
 
   /** Written in imperative fashion for performance considerations. */
   override def eval(input: Row): Any = {
+    val evaluatedKey = key.eval(input)
     val len = branchesArr.length
     var i = 0
     // If all branches fail and an elseVal is not provided, the whole statement
     // defaults to null, according to Hive's semantics.
-    var res: Any = null
     while (i < len - 1) {
-      if (branchesArr(i).eval(input) == true) {
-        res = branchesArr(i + 1).eval(input)
-        return res
+      if (equalNullSafe(evaluatedKey, branchesArr(i).eval(input))) {
+        return branchesArr(i + 1).eval(input)
       }
       i += 2
     }
+    var res: Any = null
     if (i == len - 1) {
      res = branchesArr(i).eval(input)
    }
-    res
+    return res
  }
+
+  private def equalNullSafe(l: Any, r: Any) = {
+    if (l == null && r == null) {
+      true
+    } else if (l == null || r == null) {
+      false
+    } else {
+      l == r
+    }
+  }
 
   override def toString: String = {
-    "CASE" + branches.sliding(2, 2).map {
+    s"CASE $key" + branches.sliding(2, 2).map {
       case Seq(cond, value) => s" WHEN $cond THEN $value"
       case Seq(elseValue) => s" ELSE $elseValue"
     }.mkString
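
For readers skimming the new expression, here is a standalone model (not Spark code) of the evaluation contract CaseKeyWhen implements: the key is evaluated once, each WHEN operand is compared to it null-safely, and a trailing odd element in branches acts as the ELSE value:

object CaseKeyWhenModel {
  private def equalNullSafe(l: Any, r: Any): Boolean =
    if (l == null && r == null) true
    else if (l == null || r == null) false
    else l == r

  /** branches = Seq(when1, then1, when2, then2, ..., [elseValue]). */
  def eval(key: Any, branches: Seq[Any]): Any = {
    var i = 0
    while (i < branches.length - 1) {
      if (equalNullSafe(key, branches(i))) return branches(i + 1)
      i += 2
    }
    if (branches.length % 2 == 1) branches.last else null
  }

  def main(args: Array[String]): Unit = {
    println(eval(2, Seq(1, "a", 2, "b")))                     // b
    println(eval(3, Seq(1, "a", 2, "b")))                     // null (no ELSE)
    println(eval(3, Seq(1, "a", 2, "b", "other")))            // other (ELSE)
    println(eval(null, Seq(null, "matched-null", 1, "one")))  // matched-null
  }
}

Note the last case: unlike an EqualTo-based rewrite, a null key can match a null WHEN operand, which is what the literalNull test below exercises.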
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvaluationSuite.scala
@@ -850,6 +850,32 @@ class ExpressionEvaluationSuite extends ExpressionEvaluationBaseSuite {
     assert(CaseWhen(Seq(c2, c4_notNull, c3, c5)).nullable === true)
   }
 
+  test("case key when") {
+    val row = create_row(null, 1, 2, "a", "b", "c")
+    val c1 = 'a.int.at(0)
+    val c2 = 'a.int.at(1)
+    val c3 = 'a.int.at(2)
+    val c4 = 'a.string.at(3)
+    val c5 = 'a.string.at(4)
+    val c6 = 'a.string.at(5)
+
+    val literalNull = Literal.create(null, BooleanType)
+    val literalInt = Literal(1)
+    val literalString = Literal("a")
+
+    checkEvaluation(CaseKeyWhen(c1, Seq(c2, c4, c5)), "b", row)
+    checkEvaluation(CaseKeyWhen(c1, Seq(c2, c4, literalNull, c5, c6)), "b", row)
+    checkEvaluation(CaseKeyWhen(c2, Seq(literalInt, c4, c5)), "a", row)
+    checkEvaluation(CaseKeyWhen(c2, Seq(c1, c4, c5)), "b", row)
+    checkEvaluation(CaseKeyWhen(c4, Seq(literalString, c2, c3)), 1, row)
+    checkEvaluation(CaseKeyWhen(c4, Seq(c1, c3, c5, c2, Literal(3))), 3, row)
+
+    checkEvaluation(CaseKeyWhen(literalInt, Seq(c2, c4, c5)), "a", row)
+    checkEvaluation(CaseKeyWhen(literalString, Seq(c5, c2, c4, c3)), 2, row)
+    checkEvaluation(CaseKeyWhen(literalInt, Seq(c5, c2, c4, c3)), null, row)
+    checkEvaluation(CaseKeyWhen(literalNull, Seq(c5, c2, c1, c3)), 2, row)
+  }
+
   test("complex type") {
     val row = create_row(
       "^Ba*n", // 0
sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala
@@ -357,11 +357,12 @@ final class DataFrameNaFunctions private[sql](df: DataFrame) {
    * TODO: This can be optimized to use broadcast join when replacementMap is large.
    */
   private def replaceCol(col: StructField, replacementMap: Map[_, _]): Column = {
-    val branches: Seq[Expression] = replacementMap.flatMap { case (source, target) =>
-      df.col(col.name).equalTo(lit(source).cast(col.dataType)).expr ::
-        lit(target).cast(col.dataType).expr :: Nil
+    val keyExpr = df.col(col.name).expr
+    def buildExpr(v: Any) = Cast(Literal(v), keyExpr.dataType)
+    val branches = replacementMap.flatMap { case (source, target) =>
+      Seq(buildExpr(source), buildExpr(target))
     }.toSeq
-    new Column(CaseWhen(branches ++ Seq(df.col(col.name).expr))).as(col.name)
+    new Column(CaseKeyWhen(keyExpr, branches :+ keyExpr)).as(col.name)
   }
 
   private def convertToDouble(v: Any): Double = v match {
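
A hedged spark-shell sketch of the public entry point that reaches replaceCol (API as of Spark 1.3.1+; sc and sqlContext as provided by the shell). With this patch, the replacement map compiles to a single CaseKeyWhen keyed on the column, roughly CASE height WHEN -1.0 THEN 5.0 ELSE height END:

import sqlContext.implicits._

val df = sc.parallelize(Seq(("Alice", 10.0), ("Bob", -1.0))).toDF("name", "height")

// Replace the sentinel -1.0 with 5.0 in the height column.
df.na.replace("height", Map(-1.0 -> 5.0)).show()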
12 changes: 2 additions & 10 deletions sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala
@@ -1246,16 +1246,8 @@ https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C
     case Token("TOK_FUNCTION", Token(WHEN(), Nil) :: branches) =>
       CaseWhen(branches.map(nodeToExpr))
     case Token("TOK_FUNCTION", Token(CASE(), Nil) :: branches) =>
-      val transformed = branches.drop(1).sliding(2, 2).map {
-        case Seq(condVal, value) =>
-          // FIXME (SPARK-2155): the key will get evaluated for multiple times in CaseWhen's eval().
-          // Hence effectful / non-deterministic key expressions are *not* supported at the moment.
-          // We should consider adding new Expressions to get around this.
-          Seq(EqualTo(nodeToExpr(branches(0)), nodeToExpr(condVal)),
-            nodeToExpr(value))
-        case Seq(elseVal) => Seq(nodeToExpr(elseVal))
-      }.toSeq.reduce(_ ++ _)
-      CaseWhen(transformed)
+      val keyExpr = nodeToExpr(branches.head)
+      CaseKeyWhen(keyExpr, branches.drop(1).map(nodeToExpr))
 
     /* Complex datatype manipulation */
     case Token("[", child :: ordinal :: Nil) =>
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
@@ -751,4 +751,11 @@ class SQLQuerySuite extends QueryTest {
       (6, "c", 0, 6)
     ).map(i => Row(i._1, i._2, i._3, i._4)))
   }
+
+  test("test case key when") {
+    (1 to 5).map(i => (i, i.toString)).toDF("k", "v").registerTempTable("t")
+    checkAnswer(
+      sql("SELECT CASE k WHEN 2 THEN 22 WHEN 4 THEN 44 ELSE 0 END, v FROM t"),
+      Row(0, "1") :: Row(22, "2") :: Row(0, "3") :: Row(44, "4") :: Row(0, "5") :: Nil)
+  }
 }
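
Finally, a spark-shell sketch mirroring the new end-to-end test (assuming a Spark 1.4-era shell with sqlContext and import sqlContext.implicits._ in scope):

val df = (1 to 5).map(i => (i, i.toString)).toDF("k", "v")
df.registerTempTable("t")
sqlContext.sql(
  "SELECT CASE k WHEN 2 THEN 22 WHEN 4 THEN 44 ELSE 0 END, v FROM t"
).collect().foreach(println)
// Expected: [0,1] [22,2] [0,3] [44,4] [0,5]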