Closed
Changes from 1 commit
74 commits
7dec5eb
[SPARK-47705][INFRA] Sort LogKey alphabetically and build a test to e…
dtenedor Apr 3, 2024
6a0555c
[SPARK-47700][SQL] Fix formatting of error messages with treeNode
jchen5 Apr 3, 2024
49eefc5
[SPARK-47722][SS] Wait until RocksDB background work finish before cl…
WweiL Apr 3, 2024
fbe6b1d
[SPARK-47721][DOC] Guidelines for the Structured Logging Framework
gengliangwang Apr 3, 2024
e3aab8c
[SPARK-47210][SQL] Addition of implicit casting without indeterminate…
mihailomilosevic2001 Apr 3, 2024
d87ac8e
[SPARK-47708][CONNECT] Do not log gRPC exception to stderr in PySpark
nemanja-boric-databricks Apr 4, 2024
447f8af
[SPARK-47720][CORE] Update `spark.speculation.multiplier` to 3 and `s…
dongjoon-hyun Apr 4, 2024
678aeb7
[SPARK-47683][PYTHON][BUILD] Decouple PySpark core API to pyspark.cor…
HyukjinKwon Apr 4, 2024
c25fd93
[SPARK-47705][INFRA][FOLLOWUP] Sort LogKey alphabetically and build a…
panbingkun Apr 4, 2024
d272a1b
[SPARK-47724][PYTHON][TESTS] Add an environment variable for testing …
HyukjinKwon Apr 4, 2024
d75c775
[SPARK-46812][PYTHON][TESTS][FOLLOWUP] Skip `pandas`-required tests i…
dongjoon-hyun Apr 4, 2024
3f6ac60
[SPARK-47577][CORE][PART1] Migrate logError with variables to structu…
gengliangwang Apr 4, 2024
f6999df
[SPARK-47081][CONNECT] Support Query Execution Progress
grundprinzip Apr 4, 2024
bffb02d
[SPARK-47565][PYTHON] PySpark worker pool crash resilience
Apr 4, 2024
3b8aea3
Revert "[SPARK-47708][CONNECT] Do not log gRPC exception to stderr in…
nemanja-boric-databricks Apr 4, 2024
5f9f5db
[SPARK-47689][SQL][FOLLOWUP] More accurate file path in TASK_WRITE_FA…
cloud-fan Apr 4, 2024
5ca3467
[SPARK-47729][PYTHON][TESTS] Get the proper default port for pyspark-…
HyukjinKwon Apr 4, 2024
25fc67f
[SPARK-47728][DOC] Document G1 Concurrent GC metrics
LucaCanali Apr 4, 2024
e3405c1
[SPARK-47610][CONNECT][FOLLOWUP] Add -Dio.netty.tryReflectionSetAcces…
pan3793 Apr 4, 2024
3fd0cd6
[SPARK-47598][CORE] MLLib: Migrate logError with variables to structu…
panbingkun Apr 4, 2024
240923c
[SPARK-46812][PYTHON][TESTS][FOLLOWUP] Check should_test_connect and …
dongjoon-hyun Apr 4, 2024
fb96b1a
[SPARK-47723][CORE][TESTS] Introduce a tool that can sort alphabetica…
panbingkun Apr 5, 2024
404d58c
[SPARK-47081][CONNECT][FOLLOW-UP] Add the `shell` module into PyPI pa…
HyukjinKwon Apr 5, 2024
b9ca91d
[SPARK-47712][CONNECT] Allow connect plugins to create and process Da…
tomvanbussel Apr 5, 2024
0107435
[SPARK-47734][PYTHON][TESTS] Fix flaky DataFrame.writeStream doctest …
JoshRosen Apr 5, 2024
d5620cb
[SPARK-47289][SQL] Allow extensions to log extended information in ex…
parthchandra Apr 5, 2024
aeb082e
[SPARK-47081][CONNECT][TESTS][FOLLOW-UP] Skip the flaky doctests for now
HyukjinKwon Apr 5, 2024
97e63ff
[SPARK-47735][PYTHON][TESTS] Make pyspark.testing.connectutils compat…
HyukjinKwon Apr 5, 2024
12d0367
[SPARK-47724][PYTHON][TESTS][FOLLOW-UP] Make testing script to inheri…
HyukjinKwon Apr 5, 2024
6bd0ccf
[SPARK-47511][SQL][FOLLOWUP] Rename the config REPLACE_NULLIF_USING_W…
cloud-fan Apr 5, 2024
c34baeb
[SPARK-47719][SQL] Change spark.sql.legacy.timeParserPolicy default t…
srielau Apr 5, 2024
18072b5
[SPARK-47577][CORE][PART2] Migrate logError with variables to structu…
gengliangwang Apr 5, 2024
1efbf43
[SPARK-47310][SS] Add micro-benchmark for merge operations for multip…
anishshri-db Apr 5, 2024
d1ace24
[SPARK-47582][SQL] Migrate Catalyst logInfo with variables to structu…
dtenedor Apr 5, 2024
11abc64
[SPARK-47094][SQL] SPJ : Dynamically rebalance number of buckets when…
szehon-ho Apr 6, 2024
42dc815
[SPARK-47743][CORE] Use milliseconds as the time unit in logging
gengliangwang Apr 6, 2024
7385f19
[SPARK-47592][CORE] Connector module: Migrate logError with variables…
panbingkun Apr 6, 2024
d69df59
[SPARK-47738][BUILD] Upgrade Kafka to 3.7.0
panbingkun Apr 6, 2024
60a3fbc
[SPARK-47727][PYTHON] Make SparkConf to root level to for both SparkS…
HyukjinKwon Apr 6, 2024
644687b
[SPARK-47709][BUILD] Upgrade tink to 1.13.0
LuciferYang Apr 6, 2024
4d9dbb3
[SPARK-46722][CONNECT][SS][TESTS][FOLLOW-UP] Drop the tables after te…
HyukjinKwon Apr 7, 2024
c11585a
[SPARK-47751][PYTHON][CONNECT] Make pyspark.worker_utils compatible w…
HyukjinKwon Apr 7, 2024
d743012
[SPARK-47753][PYTHON][CONNECT][TESTS] Make pyspark.testing compatible…
HyukjinKwon Apr 7, 2024
f7dff4a
[SPARK-47752][PS][CONNECT] Make pyspark.pandas compatible with pyspar…
HyukjinKwon Apr 7, 2024
e92e8f5
[SPARK-47744] Add support for negative-valued bytes in range encoder
neilramaswamy Apr 7, 2024
0c992b2
[SPARK-47755][CONNECT] Pivot should fail when the number of distinct …
zhengruifeng Apr 7, 2024
b299b2b
[SPARK-47299][PYTHON][DOCS] Use the same `versions.json` in the dropd…
panbingkun Apr 8, 2024
cc6c0eb
[MINOR][TESTS] Deduplicate test cases `test_parse_datatype_string`
HyukjinKwon Apr 8, 2024
ad2367c
[MINOR][PYTHON][SS][TESTS] Drop the tables after being used at `test_…
HyukjinKwon Apr 8, 2024
f576b85
[SPARK-47541][SQL] Collated strings in complex types supporting opera…
nikolamand-db Apr 8, 2024
d55bb61
[SPARK-47558][SS] State TTL support for ValueState
sahnib Apr 8, 2024
3a39ac2
[SPARK-47713][SQL][CONNECT] Fix a self-join failure
zhengruifeng Apr 8, 2024
eb8e997
[SPARK-47657][SQL] Implement collation filter push down support per f…
stefankandic Apr 8, 2024
f0d8f82
[SPARK-47750][DOCS][SQL] Postgres: Document Mapping Spark SQL Data Ty…
yaooqinn Apr 8, 2024
211afd4
[MINOR][PYTHON][CONNECT][TESTS] Enable `MapInPandasParityTests.test_d…
zhengruifeng Apr 8, 2024
f94d95d
[SPARK-47762][PYTHON][CONNECT] Add pyspark.sql.connect.protobuf into …
HyukjinKwon Apr 8, 2024
29d077f
[SPARK-47748][BUILD] Upgrade `zstd-jni` to 1.5.6-2
panbingkun Apr 8, 2024
60806c6
[SPARK-47746] Implement ordinal-based range encoding in the RocksDBSt…
neilramaswamy Apr 8, 2024
134a139
[SPARK-47681][SQL] Add schema_of_variant expression
chenhao-db Apr 8, 2024
abb7b04
[SPARK-47504][SQL] Resolve AbstractDataType simpleStrings for StringT…
mihailomilosevic2001 Apr 8, 2024
91b2331
[WIP] ListStateTTL implementation
ericm-db Apr 8, 2024
479392a
adding log lines
ericm-db Apr 8, 2024
7aab43e
test cases pass
ericm-db Apr 8, 2024
71f960d
spacing
ericm-db Apr 8, 2024
998764c
using NextIterator instead
ericm-db Apr 8, 2024
1dcb7d8
refactor feedback
ericm-db Apr 9, 2024
47867e7
undoing unnecessary change
ericm-db Apr 9, 2024
cfd30c3
refactor get_ttl_value
ericm-db Apr 9, 2024
4a19cb7
refactor test case
ericm-db Apr 9, 2024
993125c
specific doc for clearIfExpired
ericm-db Apr 9, 2024
fd5200f
moving isExpired to common place
ericm-db Apr 9, 2024
d43ffb1
refactoring to use common utils
ericm-db Apr 9, 2024
30f6094
updating interface header
ericm-db Apr 9, 2024
e9376d9
Map State TTL, Initial Commit
ericm-db Apr 9, 2024
refactoring to use common utils
ericm-db committed Apr 9, 2024
commit d43ffb165047abb4223e1394fffa9ae07f2988b4
@@ -44,3 +44,23 @@ private[sql] trait ListState[S] extends Serializable {
/** Removes this state for the given grouping key. */
def clear(): Unit
}

+ /**
+ * Interface used for arbitrary stateful operations with the v2 API to modify
+ * list value state.
+ */
+ private[sql] trait ListStateModify[S] extends Serializable {
+
+ /** Update the value of the list. */
+ def put(newState: Array[S]): Unit
+
+ /** Append an entry to the list. */
+ def appendValue(newState: S): Unit
+
+ /** Append an entire list to the existing value. */
+ def appendList(newState: Array[S]): Unit
+
+ /** Removes this state for the given grouping key. */
+ def clear(): Unit
+ }

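The new ListStateModify trait factors the four mutators out of ListState so that the TTL and non-TTL implementations can share a single modifier class (introduced later in this commit). As an illustration only, a minimal in-memory implementation of the contract for a single grouping key might look like the sketch below (hypothetical; the real implementation delegates to a StateStore and encodes values as UnsafeRows):

import scala.collection.mutable.ArrayBuffer

// Toy, single-key stand-in for ListStateModify[S]; illustration only.
class InMemoryListStateModify[S] extends ListStateModify[S] {
  private val buf = ArrayBuffer.empty[S]

  /** Update the value of the list (overwrite semantics). */
  override def put(newState: Array[S]): Unit = { buf.clear(); buf ++= newState }

  /** Append an entry to the list. */
  override def appendValue(newState: S): Unit = buf += newState

  /** Append an entire list to the existing value. */
  override def appendList(newState: Array[S]): Unit = buf ++= newState

  /** Removes this state for the given grouping key. */
  override def clear(): Unit = buf.clear()
}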
@@ -20,7 +20,7 @@ import org.apache.spark.internal.Logging
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.execution.streaming.TransformWithStateKeyValueRowSchema.{KEY_ROW_SCHEMA, VALUE_ROW_SCHEMA}
- import org.apache.spark.sql.execution.streaming.state.{NoPrefixKeyStateEncoderSpec, StateStore, StateStoreErrors}
+ import org.apache.spark.sql.execution.streaming.state.{NoPrefixKeyStateEncoderSpec, StateStore}
import org.apache.spark.sql.streaming.ListState

/**
@@ -44,6 +44,9 @@ class ListStateImpl[S](

private val stateTypesEncoder = StateTypesEncoder(keySerializer, valEncoder, stateName)

+ private val listStateModifyImpl = new ListStateModifyImpl[S](
+ store, stateName, keyExprEnc, valEncoder, stateTypesEncoder.encodeValue)
+
store.createColFamilyIfAbsent(stateName, KEY_ROW_SCHEMA, VALUE_ROW_SCHEMA,
NoPrefixKeyStateEncoderSpec(KEY_ROW_SCHEMA), useMultipleValuesPerKey = true)

@@ -75,51 +78,21 @@

/** Update the value of the list. */
override def put(newState: Array[S]): Unit = {
- validateNewState(newState)
-
- val encodedKey = stateTypesEncoder.encodeGroupingKey()
- var isFirst = true
-
- newState.foreach { v =>
- val encodedValue = stateTypesEncoder.encodeValue(v)
- if (isFirst) {
- store.put(encodedKey, encodedValue, stateName)
- isFirst = false
- } else {
- store.merge(encodedKey, encodedValue, stateName)
- }
- }
+ listStateModifyImpl.put(newState)
}

/** Append an entry to the list. */
override def appendValue(newState: S): Unit = {
- StateStoreErrors.requireNonNullStateValue(newState, stateName)
- store.merge(stateTypesEncoder.encodeGroupingKey(),
- stateTypesEncoder.encodeValue(newState), stateName)
+ listStateModifyImpl.appendValue(newState)
}

/** Append an entire list to the existing value. */
override def appendList(newState: Array[S]): Unit = {
- validateNewState(newState)
-
- val encodedKey = stateTypesEncoder.encodeGroupingKey()
- newState.foreach { v =>
- val encodedValue = stateTypesEncoder.encodeValue(v)
- store.merge(encodedKey, encodedValue, stateName)
- }
+ listStateModifyImpl.appendList(newState)
}

/** Remove this state. */
override def clear(): Unit = {
- store.remove(stateTypesEncoder.encodeGroupingKey(), stateName)
- }
-
- private def validateNewState(newState: Array[S]): Unit = {
- StateStoreErrors.requireNonNullStateValue(newState, stateName)
- StateStoreErrors.requireNonEmptyListStateValue(newState, stateName)
-
- newState.foreach { v =>
- StateStoreErrors.requireNonNullStateValue(v, stateName)
- }
+ listStateModifyImpl.clear()
}
}
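After this refactor the caller-visible ListState API is unchanged; only the internals delegate to the shared modifier. A usage sketch under the assumption that a ListState[String] handle has already been obtained from a stateful processor (the acquisition itself is outside this diff):

// Exercises the four mutators the refactor leaves intact; sketch only.
def exercise(listState: ListState[String]): Unit = {
  listState.put(Array("a", "b"))    // overwrite the whole list
  listState.appendValue("c")        // append a single entry
  listState.appendList(Array("d"))  // append several entries at once
  listState.clear()                 // remove all state for the grouping key
}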
@@ -19,7 +19,7 @@ package org.apache.spark.sql.execution.streaming
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.execution.streaming.TransformWithStateKeyValueRowSchema.{KEY_ROW_SCHEMA, VALUE_ROW_SCHEMA_WITH_TTL}
- import org.apache.spark.sql.execution.streaming.state.{NoPrefixKeyStateEncoderSpec, StateStore, StateStoreErrors}
+ import org.apache.spark.sql.execution.streaming.state.{NoPrefixKeyStateEncoderSpec, StateStore}
import org.apache.spark.sql.streaming.{ListState, TTLConfig}
import org.apache.spark.util.NextIterator

@@ -50,6 +50,9 @@ class ListStateImplWithTTL[S](
private val ttlExpirationMs =
StateTTL.calculateExpirationTimeForDuration(ttlConfig.ttlDuration, batchTimestampMs)

+ private val listStatePutImpl = new ListStateModifyImpl[S](
+ store, stateName, keyExprEnc, valEncoder, stateTypesEncoder.encodeValue(_, ttlExpirationMs))
+
initialize()

private def initialize(): Unit = {
@@ -98,58 +101,25 @@

/** Update the value of the list. */
override def put(newState: Array[S]): Unit = {
- validateNewState(newState)
-
- val encodedKey = stateTypesEncoder.encodeGroupingKey()
- var isFirst = true
-
- newState.foreach { v =>
- val encodedValue = stateTypesEncoder.encodeValue(v, ttlExpirationMs)
- if (isFirst) {
- store.put(encodedKey, encodedValue, stateName)
- isFirst = false
- } else {
- store.merge(encodedKey, encodedValue, stateName)
- }
- }
- val serializedGroupingKey = stateTypesEncoder.serializeGroupingKey()
- upsertTTLForStateKey(ttlExpirationMs, serializedGroupingKey)
+ listStatePutImpl.put(newState)
+ upsertTTLForStateKey()
}

/** Append an entry to the list. */
override def appendValue(newState: S): Unit = {
- StateStoreErrors.requireNonNullStateValue(newState, stateName)
- store.merge(stateTypesEncoder.encodeGroupingKey(),
- stateTypesEncoder.encodeValue(newState, ttlExpirationMs), stateName)
- val serializedGroupingKey = stateTypesEncoder.serializeGroupingKey()
- upsertTTLForStateKey(ttlExpirationMs, serializedGroupingKey)
+ listStatePutImpl.appendValue(newState)
+ upsertTTLForStateKey()
}

/** Append an entire list to the existing value. */
override def appendList(newState: Array[S]): Unit = {
- validateNewState(newState)
-
- val encodedKey = stateTypesEncoder.encodeGroupingKey()
- newState.foreach { v =>
- val encodedValue = stateTypesEncoder.encodeValue(v, ttlExpirationMs)
- store.merge(encodedKey, encodedValue, stateName)
- }
- val serializedGroupingKey = stateTypesEncoder.serializeGroupingKey()
- upsertTTLForStateKey(ttlExpirationMs, serializedGroupingKey)
+ listStatePutImpl.appendList(newState)
+ upsertTTLForStateKey()
}

/** Remove this state. */
override def clear(): Unit = {
- store.remove(stateTypesEncoder.encodeGroupingKey(), stateName)
- }
-
- private def validateNewState(newState: Array[S]): Unit = {
- StateStoreErrors.requireNonNullStateValue(newState, stateName)
- StateStoreErrors.requireNonEmptyListStateValue(newState, stateName)
-
- newState.foreach { v =>
- StateStoreErrors.requireNonNullStateValue(v, stateName)
- }
+ listStatePutImpl.clear()
}

/**
@@ -176,6 +146,11 @@
}
}

+ private def upsertTTLForStateKey(): Unit = {
+ val serializedGroupingKey = stateTypesEncoder.serializeGroupingKey()
+ upsertTTLForStateKey(ttlExpirationMs, serializedGroupingKey)
+ }
+
/*
* Internal methods to probe state for testing. The below methods exist for unit tests
* to read the state ttl values, and ensure that values are persisted correctly in
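Note how the two wrappers configure the shared modifier: ListStateImpl passes stateTypesEncoder.encodeValue directly, while ListStateImplWithTTL fixes the expiration argument up front with stateTypesEncoder.encodeValue(_, ttlExpirationMs). A minimal sketch of that partial-application pattern with stand-in types (hypothetical names; the real encoder returns an UnsafeRow, not a byte array):

// Stand-ins for the two encodeValue overloads; illustration only.
def encodeValue(value: String): Array[Byte] =
  value.getBytes("UTF-8")
def encodeValueWithTTL(value: String, expirationMs: Long): Array[Byte] =
  s"$value@$expirationMs".getBytes("UTF-8")

val ttlExpirationMs = 61000L
val plain: String => Array[Byte] = encodeValue                                // non-TTL path
val withTtl: String => Array[Byte] = encodeValueWithTTL(_, ttlExpirationMs)  // TTL path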
@@ -0,0 +1,97 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.sql.execution.streaming

import org.apache.spark.internal.Logging
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.catalyst.expressions.UnsafeRow
import org.apache.spark.sql.execution.streaming.state.{StateStore, StateStoreErrors}
import org.apache.spark.sql.streaming.ListStateModify

/**
* Provides a concrete implementation of the modify operations (put, append, clear) for a
* list-of-values state variable used in the streaming transformWithState operator.
*
* @param store - reference to the StateStore instance to be used for storing state
* @param stateName - name of logical state partition
* @param keyExprEnc - Spark SQL encoder for key
* @param valEncoder - Spark SQL encoder for value
* @param encodeValue - function that encodes a state value into an UnsafeRow, with or
* without an attached TTL expiration timestamp
* @tparam S - data type of object that will be stored in the list
*/
class ListStateModifyImpl[S](
store: StateStore,
stateName: String,
keyExprEnc: ExpressionEncoder[Any],
valEncoder: Encoder[S],
encodeValue: S => UnsafeRow)
extends ListStateModify[S] with Logging {

private val keySerializer = keyExprEnc.createSerializer()

private val stateTypesEncoder = StateTypesEncoder(keySerializer, valEncoder, stateName)

/** Update the value of the list. */
override def put(newState: Array[S]): Unit = {
validateNewState(newState)

val encodedKey = stateTypesEncoder.encodeGroupingKey()
var isFirst = true

newState.foreach { v =>
val encodedValue = encodeValue(v)
if (isFirst) {
store.put(encodedKey, encodedValue, stateName)
isFirst = false
} else {
store.merge(encodedKey, encodedValue, stateName)
}
}
}

/** Append an entry to the list. */
override def appendValue(newState: S): Unit = {
StateStoreErrors.requireNonNullStateValue(newState, stateName)
store.merge(stateTypesEncoder.encodeGroupingKey(),
encodeValue(newState), stateName)
}

/** Append an entire list to the existing value. */
override def appendList(newState: Array[S]): Unit = {
validateNewState(newState)

val encodedKey = stateTypesEncoder.encodeGroupingKey()
newState.foreach { v =>
val encodedValue = encodeValue(v)
store.merge(encodedKey, encodedValue, stateName)
}
}

/** Remove this state. */
override def clear(): Unit = {
store.remove(stateTypesEncoder.encodeGroupingKey(), stateName)
}

private def validateNewState(newState: Array[S]): Unit = {
StateStoreErrors.requireNonNullStateValue(newState, stateName)
StateStoreErrors.requireNonEmptyListStateValue(newState, stateName)

newState.foreach { v =>
StateStoreErrors.requireNonNullStateValue(v, stateName)
}
}
}
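One behavioral detail worth noting in put(): the first element is written with store.put, which overwrites any previous list for the key, and every subsequent element goes through store.merge, which appends to the multi-valued column family. A self-contained sketch of that overwrite-then-append protocol against a toy map-backed store (illustration only; the real StateStore operates on encoded UnsafeRows):

import scala.collection.mutable

object PutMergeDemo {
  // Toy multi-valued store: key -> list of values.
  private val store = mutable.Map.empty[String, mutable.ArrayBuffer[String]]

  private def storePut(key: String, v: String): Unit =    // like store.put: overwrite
    store(key) = mutable.ArrayBuffer(v)
  private def storeMerge(key: String, v: String): Unit =  // like store.merge: append
    store.getOrElseUpdate(key, mutable.ArrayBuffer.empty) += v

  def put(key: String, values: Array[String]): Unit = {
    var isFirst = true
    values.foreach { v =>
      if (isFirst) { storePut(key, v); isFirst = false }
      else storeMerge(key, v)
    }
  }

  def main(args: Array[String]): Unit = {
    storeMerge("k1", "stale")      // pre-existing entry
    put("k1", Array("a", "b"))     // overwrites, then appends
    storeMerge("k1", "c")          // appendValue semantics
    println(store("k1"))           // ArrayBuffer(a, b, c)
  }
}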
@@ -101,7 +101,7 @@ class TransformWithListStateTTLSuite extends TransformWithStateTTLTest {
new ListStateTTLProcessor(ttlConfig)
}

test("verify iterator works with expired values in middle of list") {
test("verify iterator works with expired values in middle of list after restarting stream") {
withSQLConf(SQLConf.STATE_STORE_PROVIDER_CLASS.key ->
classOf[RocksDBStateStoreProvider].getName,
SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
@@ -208,7 +208,7 @@
}


test("verify iterator works with expired values in beginning of list") {
test("verify iterator works with expired values in beginning of list after restarting stream") {
withSQLConf(SQLConf.STATE_STORE_PROVIDER_CLASS.key ->
classOf[RocksDBStateStoreProvider].getName,
SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
@@ -297,4 +297,69 @@
}
}
}

test("verify expired values are evicted from list state") {
withSQLConf(SQLConf.STATE_STORE_PROVIDER_CLASS.key ->
classOf[RocksDBStateStoreProvider].getName,
SQLConf.SHUFFLE_PARTITIONS.key -> "1") {

val inputStream = MemoryStream[InputEvent]
val ttlConfig = TTLConfig(ttlDuration = Duration.ofMinutes(1))
val result = inputStream.toDS()
.groupByKey(x => x.key)
.transformWithState(
getProcessor(ttlConfig),
TimeoutMode.NoTimeouts(),
TTLMode.ProcessingTimeTTL(),
OutputMode.Append())
val clock = new StreamManualClock

testStream(result)(
StartStream(Trigger.ProcessingTime("1 second"), triggerClock = clock),
AddData(inputStream, InputEvent("k1", "put", 1)),
AdvanceManualClock(1 * 1000),
AddData(inputStream, InputEvent("k1", "append", 2)),
AddData(inputStream, InputEvent("k1", "append", 3)),
// advance clock to trigger processing
AdvanceManualClock(1 * 1000),
CheckNewAnswer(),
// get ttl values
AddData(inputStream, InputEvent("k1", "get_ttl_value_from_state", -1, null)),
AdvanceManualClock(1 * 1000),
CheckNewAnswer(
OutputEvent("k1", 1, isTTLValue = true, 61000),
OutputEvent("k1", 2, isTTLValue = true, 62000),
OutputEvent("k1", 3, isTTLValue = true, 62000)
),
AddData(inputStream, InputEvent("k1", "get", -1, null)),
AdvanceManualClock(1 * 1000),
CheckNewAnswer(
OutputEvent("k1", 1, isTTLValue = false, -1),
OutputEvent("k1", 2, isTTLValue = false, -1),
OutputEvent("k1", 3, isTTLValue = false, -1)
),
AdvanceManualClock(45 * 1000),
AddData(inputStream, InputEvent("k1", "append", 4)),
AdvanceManualClock(1 * 1000),
AddData(inputStream, InputEvent("k1", "get_ttl_value_from_state", -1, null)),
AdvanceManualClock(1 * 1000),
CheckNewAnswer(
OutputEvent("k1", 1, isTTLValue = true, 61000),
OutputEvent("k1", 2, isTTLValue = true, 62000),
OutputEvent("k1", 3, isTTLValue = true, 62000),
OutputEvent("k1", 4, isTTLValue = true, 110000)
),
// advance clock so data expires
AdvanceManualClock(30 * 1000),
// run a no data batch
CheckNewAnswer(),
AddData(inputStream, InputEvent("k1", "get_without_enforcing_ttl", -1)),
AdvanceManualClock(1 * 1000),
CheckNewAnswer(
OutputEvent("k1", 4, isTTLValue = false, -1)
),
StopStream
)
}
}
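The expected TTL timestamps in the new test follow directly from the manual clock: each batch's timestamp plus the one-minute TTL. A sketch of the arithmetic, with batch timestamps reconstructed from the AdvanceManualClock calls above (assumed reading of the test; the suite itself derives them from StreamManualClock):

object TtlTimestampCheck {
  def main(args: Array[String]): Unit = {
    val ttlMs = 60000L         // TTLConfig(ttlDuration = Duration.ofMinutes(1))
    val batchPut1 = 1000L      // first batch: put(1) processed at clock = 1s
    val batchAppend23 = 2000L  // second batch: append(2), append(3) at clock = 2s
    val batchAppend4 = 50000L  // append(4) batch: 1+1+1+1+45+1 seconds of advances
    assert(batchPut1 + ttlMs == 61000L)      // expiration checked for value 1
    assert(batchAppend23 + ttlMs == 62000L)  // expiration for values 2 and 3
    assert(batchAppend4 + ttlMs == 110000L)  // expiration for value 4
    // After the final 30s advance the clock reads 81s, so values 1-3
    // (expiring at 61s and 62s) are evicted while value 4 (110s) survives,
    // matching the final CheckNewAnswer.
  }
}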
}