Commit 215c225

clarify retry docs
1 parent 9fb74e2 commit 215c225

1 file changed: 4 additions (+), 4 deletions (-)

sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataWriter.java
@@ -38,10 +38,10 @@
  * succeeds), a {@link WriterCommitMessage} will be sent to the driver side and pass to
  * {@link DataSourceWriter#commit(WriterCommitMessage[])} with commit messages from other data
  * writers. If this data writer fails(one record fails to write or {@link #commit()} fails), an
- * exception will be sent to the driver side, and Spark may retry this writing task for some times,
- * each time {@link DataWriterFactory#createDataWriter(int, int, long)} gets a different
- * `attemptNumber`, and finally call {@link DataSourceWriter#abort(WriterCommitMessage[])} if all
- * retry fail.
+ * exception will be sent to the driver side, and Spark may retry this writing task a few times.
+ * In each retry, {@link DataWriterFactory#createDataWriter(int, int, long)} will receive a
+ * different `attemptNumber`. Spark will call {@link DataSourceWriter#abort(WriterCommitMessage[])}
+ * when the configured number of retries is exhausted.
  *
  * Besides the retry mechanism, Spark may launch speculative tasks if the existing writing task
  * takes too long to finish. Different from retried tasks, which are launched one by one after the
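
For context, here is a minimal sketch (not part of this commit or the Spark source tree) of how a DataWriter implementation might honor the write/commit/abort contract described in the updated javadoc. The class name BufferedDataWriter, the TaskCommitMessage type, and the in-memory buffering are illustrative assumptions; only the DataWriter, WriterCommitMessage, and InternalRow types from the DataSource V2 writer API of this era are taken as given.

// Hypothetical example, not from the Spark source tree.
import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.sources.v2.writer.DataWriter;
import org.apache.spark.sql.sources.v2.writer.WriterCommitMessage;

public class BufferedDataWriter implements DataWriter<InternalRow> {

  // Message returned by commit(); Spark collects one per successful task
  // attempt and passes the array to DataSourceWriter#commit on the driver.
  public static class TaskCommitMessage implements WriterCommitMessage, Serializable {
    public final int partitionId;
    public final int attemptNumber;
    public final long recordCount;

    TaskCommitMessage(int partitionId, int attemptNumber, long recordCount) {
      this.partitionId = partitionId;
      this.attemptNumber = attemptNumber;
      this.recordCount = recordCount;
    }
  }

  private final int partitionId;
  private final int attemptNumber;  // differs on every retry of this task
  private final List<InternalRow> buffer = new ArrayList<>();

  // A DataWriterFactory#createDataWriter implementation would forward the
  // partitionId and attemptNumber it receives from Spark into here.
  public BufferedDataWriter(int partitionId, int attemptNumber) {
    this.partitionId = partitionId;
    this.attemptNumber = attemptNumber;
  }

  @Override
  public void write(InternalRow record) throws IOException {
    // If this throws, the task fails: Spark calls abort() on this writer and
    // may retry the task with a new attemptNumber.
    buffer.add(record.copy());
  }

  @Override
  public WriterCommitMessage commit() throws IOException {
    // Flush buffered records to the external sink here, then report success.
    // The returned message travels to the driver, which eventually calls
    // DataSourceWriter#commit with the messages from all partitions.
    return new TaskCommitMessage(partitionId, attemptNumber, buffer.size());
  }

  @Override
  public void abort() throws IOException {
    // Called when this attempt fails: discard partial output so a retried
    // attempt (or the driver-side DataSourceWriter#abort) can clean up safely.
    buffer.clear();
  }
}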
