sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer — 1 file changed, 4 insertions(+), 3 deletions(-)
@@ -22,7 +22,7 @@
 import org.apache.spark.annotation.InterfaceStability;
 
 /**
- * A data writer returned by {@link DataWriterFactory#createDataWriter(int, int)} and is
+ * A data writer returned by {@link DataWriterFactory#createDataWriter(int, int, long)} and is
  * responsible for writing data for an input RDD partition.
  *
  * One Spark task has one exclusive data writer, so there is no thread-safe concern.
@@ -36,8 +36,9 @@
  * {@link DataSourceWriter#commit(WriterCommitMessage[])} with commit messages from other data
  * writers. If this data writer fails(one record fails to write or {@link #commit()} fails), an
  * exception will be sent to the driver side, and Spark will retry this writing task for some times,
- * each time {@link DataWriterFactory#createDataWriter(int, int)} gets a different `attemptNumber`,
- * and finally call {@link DataSourceWriter#abort(WriterCommitMessage[])} if all retry fail.
+ * each time {@link DataWriterFactory#createDataWriter(int, int, long)} gets a different
+ * `attemptNumber`, and finally call {@link DataSourceWriter#abort(WriterCommitMessage[])} if all
+ * retry fail.
  *
  * Besides the retry mechanism, Spark may launch speculative tasks if the existing writing task
  * takes too long to finish. Different from retried tasks, which are launched one by one after the