Conversation

@zhengruifeng (Contributor) commented Jun 24, 2025

What changes were proposed in this pull request?

Upgrade the minimum version of Python to 3.10

Why are the changes needed?

Python 3.9 is reaching its end of life (EOL)

Does this PR introduce any user-facing change?

Yes, a documentation change

How was this patch tested?

PR builder with upgraded image

https://github.com/zhengruifeng/spark/actions/runs/16064529566/job/45340924656

Was this patch authored or co-authored using generative AI tooling?

No
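
For context, the heart of such a floor bump in the packaging metadata is a one-line change per `setup.py`, plus dropping the old classifier. A minimal sketch, using a hypothetical package; the `>=3.9` values come from the `setup.py` lines quoted in the review below:

```python
# Minimal sketch of the python/packaging/*/setup.py change implied by this PR.
# Illustrative only; the package name and classifier list are hypothetical.
from setuptools import setup

setup(
    name="pyspark-example",      # hypothetical name, for illustration
    version="0.0.0",
    python_requires=">=3.10",    # was ">=3.9"
    classifiers=[
        # the "Programming Language :: Python :: 3.9" classifier is removed
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
    ],
)
```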

@HyukjinKwon (Member) left a comment:

We would need to fix more places, listed below. This can be done in a separate PR:

.github/workflows/build_infra_images_cache.yml:      - name: Build and push (PySpark with Python 3.9)
.github/workflows/build_infra_images_cache.yml:      - name: Image digest (PySpark with Python 3.9)
.github/workflows/build_python_3.9.yml:name: "Build / Python-only (master, Python 3.9)"
dev/create-release/spark-rm/Dockerfile:# Install Python 3.9
dev/infra/Dockerfile:# Install Python 3.9
dev/spark-test-image/python-309/Dockerfile:LABEL org.opencontainers.image.ref.name="Apache Spark Infra Image For PySpark with Python 3.09"
dev/spark-test-image/python-309/Dockerfile:# Install Python 3.9
dev/spark-test-image/python-309/Dockerfile:# Python deps for Spark Connect
dev/spark-test-image/python-309/Dockerfile:# Install Python 3.9 packages
dev/spark-test-image/python-minimum/Dockerfile:# Install Python 3.9
dev/spark-test-image/python-minimum/Dockerfile:# Install Python 3.9 packages
dev/spark-test-image/python-ps-minimum/Dockerfile:# Install Python 3.9
dev/spark-test-image/python-ps-minimum/Dockerfile:# Install Python 3.9 packages
docs/index.md:Spark runs on Java 17/21, Scala 2.13, Python 3.9+, and R 3.5+ (Deprecated).
docs/rdd-programming-guide.md:Spark {{site.SPARK_VERSION}} works with Python 3.9+. It can use the standard CPython interpreter,
python/docs/source/development/contributing.rst:    # Python 3.9+ is required
python/docs/source/development/contributing.rst:With Python 3.9+, pip can be used as below to install and set up the development environment.
python/docs/source/getting_started/install.rst:Python 3.9 and above.
python/docs/source/tutorial/pandas_on_spark/typehints.rst:With Python 3.9+, you can specify the type hints by using pandas instances as follows:
python/packaging/classic/setup.py:            "Programming Language :: Python :: 3.9",
python/packaging/client/setup.py:            "Programming Language :: Python :: 3.9",
python/packaging/connect/setup.py:            "Programming Language :: Python :: 3.9",
python/pyspark/cloudpickle/cloudpickle.py:        # "nogil" Python: modified attributes from 3.9
python/pyspark/pandas/typedef/typehints.py:# TODO: Remove this variadic-generic hack by tuple once ww drop Python up to 3.9.
python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:        # SPARK-30921: We should not pushdown predicates of PythonUDFs through Aggregate.
python/pyspark/sql/udf.py:    # Note: Python 3.9.15, Pandas 1.5.2 and PyArrow 10.0.1 are used.
python/run-tests:  echo "Python versions prior to 3.9 are not supported."
.github/workflows/build_and_test.yml:            python3.9 ./dev/structured_logging_style.py
.github/workflows/build_and_test.yml:        python3.9 -m pip install 'flake8==3.9.0' pydata_sphinx_theme 'mypy==0.982' 'pytest==7.1.3' 'pytest-mypy-plugins==1.9.3' numpydoc 'jinja2<3.0.0' 'black==22.6.0'
.github/workflows/build_and_test.yml:        python3.9 -m pip install 'pandas-stubs==1.2.0.53' ipython 'grpcio==1.56.0' 'grpc-stubs==1.24.11' 'googleapis-common-protos-stubs==2.2.0'
.github/workflows/build_and_test.yml:      run: python3.9 -m pip list
.github/workflows/build_and_test.yml:      run: PYTHON_EXECUTABLE=python3.9 ./dev/lint-python
.github/workflows/build_and_test.yml:        python3.9 -m pip install 'protobuf==4.25.1' 'mypy-protobuf==3.3.0'
.github/workflows/build_and_test.yml:      run: if test -f ./dev/connect-check-protos.py; then PATH=$PATH:$HOME/buf/bin PYTHON_EXECUTABLE=python3.9 ./dev/connect-check-protos.py; fi
.github/workflows/build_and_test.yml:      PYSPARK_DRIVER_PYTHON: python3.9
.github/workflows/build_and_test.yml:      PYSPARK_PYTHON: python3.9
.github/workflows/build_and_test.yml:        python3.9 -m pip install 'sphinx==4.5.0' mkdocs 'pydata_sphinx_theme>=0.13' sphinx-copybutton nbsphinx numpydoc jinja2 markupsafe 'pyzmq<24.0.0' 'sphinxcontrib-applehelp==1.0.4' 'sphinxcontrib-devhelp==1.0.2' 'sphinxcontrib-htmlhelp==2.0.1' 'sphinxcontrib-qthelp==1.0.3' 'sphinxcontrib-serializinghtml==1.1.5'
.github/workflows/build_and_test.yml:        python3.9 -m pip install ipython_genutils # See SPARK-38517
.github/workflows/build_and_test.yml:        python3.9 -m pip install sphinx_plotly_directive 'numpy>=1.20.0' pyarrow pandas 'plotly<6.0.0'
.github/workflows/build_and_test.yml:        python3.9 -m pip install 'docutils<0.18.0' # See SPARK-39421
.github/workflows/build_and_test.yml:      run: python3.9 -m pip list
.github/workflows/build_and_test.yml:        # We need this link to make sure `python3` points to `python3.9` which contains the prerequisite packages.
.github/workflows/build_and_test.yml:        ln -s "$(which python3.9)" "/usr/local/bin/python3"
.github/workflows/build_and_test.yml:          pyspark_modules=`cd dev && python3.9 -c "import sparktestsupport.modules as m; print(','.join(m.name for m in m.all_modules if m.name.startswith('pyspark')))"`
.github/workflows/build_infra_images_cache.yml:    - 'dev/spark-test-image/python-309/Dockerfile'
.github/workflows/build_infra_images_cache.yml:        if: hashFiles('dev/spark-test-image/python-309/Dockerfile') != ''
.github/workflows/build_infra_images_cache.yml:        id: docker_build_pyspark_python_309
.github/workflows/build_infra_images_cache.yml:          context: ./dev/spark-test-image/python-309/
.github/workflows/build_infra_images_cache.yml:          tags: ghcr.io/apache/spark/apache-spark-github-action-image-pyspark-python-309-cache:${{ github.ref_name }}-static
.github/workflows/build_infra_images_cache.yml:          cache-from: type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-pyspark-python-309-cache:${{ github.ref_name }}
.github/workflows/build_infra_images_cache.yml:          cache-to: type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-pyspark-python-309-cache:${{ github.ref_name }},mode=max
.github/workflows/build_infra_images_cache.yml:        if: hashFiles('dev/spark-test-image/python-309/Dockerfile') != ''
.github/workflows/build_infra_images_cache.yml:        run: echo ${{ steps.docker_build_pyspark_python_309.outputs.digest }}
.github/workflows/build_python_3.9.yml:          "PYSPARK_IMAGE_TO_TEST": "python-309",
.github/workflows/build_python_3.9.yml:          "PYTHON_TO_TEST": "python3.9"
.github/workflows/build_python_minimum.yml:          "PYTHON_TO_TEST": "python3.9"
.github/workflows/build_python_ps_minimum.yml:          "PYTHON_TO_TEST": "python3.9"
README.md:|            | [![GitHub Actions Build](https://github.com/apache/spark/actions/workflows/build_python_3.9.yml/badge.svg)](https://github.com/apache/spark/actions/workflows/build_python_3.9.yml)                             |
dev/create-release/spark-rm/Dockerfile:    python3.9 python3.9-distutils \
dev/create-release/spark-rm/Dockerfile:RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.9
dev/create-release/spark-rm/Dockerfile:RUN python3.9 -m pip install --ignore-installed blinker>=1.6.2 # mlflow needs this
dev/create-release/spark-rm/Dockerfile:RUN python3.9 -m pip install --force $BASIC_PIP_PKGS unittest-xml-reporting $CONNECT_PIP_PKGS && \
dev/create-release/spark-rm/Dockerfile:    python3.9 -m pip install 'torch<2.6.0' torchvision --index-url https://download.pytorch.org/whl/cpu && \
dev/create-release/spark-rm/Dockerfile:    python3.9 -m pip install torcheval && \
dev/create-release/spark-rm/Dockerfile:    python3.9 -m pip cache purge
dev/create-release/spark-rm/Dockerfile:RUN python3.9 -m pip install 'sphinx==4.5.0' mkdocs 'pydata_sphinx_theme>=0.13' sphinx-copybutton nbsphinx numpydoc jinja2 markupsafe 'pyzmq<24.0.0' \
dev/create-release/spark-rm/Dockerfile:RUN python3.9 -m pip list
dev/create-release/spark-rm/Dockerfile:RUN ln -s "$(which python3.9)" "/usr/local/bin/python"
dev/create-release/spark-rm/Dockerfile:RUN ln -s "$(which python3.9)" "/usr/local/bin/python3"
dev/infra/Dockerfile:    python3.9 python3.9-distutils \
dev/infra/Dockerfile:RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.9
dev/infra/Dockerfile:RUN python3.9 -m pip install --ignore-installed blinker>=1.6.2 # mlflow needs this
dev/infra/Dockerfile:RUN python3.9 -m pip install --force $BASIC_PIP_PKGS unittest-xml-reporting $CONNECT_PIP_PKGS && \
dev/infra/Dockerfile:    python3.9 -m pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu && \
dev/infra/Dockerfile:    python3.9 -m pip install torcheval && \
dev/infra/Dockerfile:    python3.9 -m pip cache purge
dev/spark-test-image-util/docs/run-in-container:# We need this link to make sure `python3` points to `python3.9` which contains the prerequisite packages.
dev/spark-test-image-util/docs/run-in-container:ln -s "$(which python3.9)" "/usr/local/bin/python3"
dev/spark-test-image/python-309/Dockerfile:    libpython3-dev \
dev/spark-test-image/python-309/Dockerfile:    python3.9 \
dev/spark-test-image/python-309/Dockerfile:    python3.9-distutils \
dev/spark-test-image/python-309/Dockerfile:RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.9
dev/spark-test-image/python-309/Dockerfile:RUN python3.9 -m pip install --ignore-installed blinker>=1.6.2 # mlflow needs this
dev/spark-test-image/python-309/Dockerfile:RUN python3.9 -m pip install --force $BASIC_PIP_PKGS unittest-xml-reporting $CONNECT_PIP_PKGS && \
dev/spark-test-image/python-309/Dockerfile:    python3.9 -m pip install 'torch<2.6.0' torchvision --index-url https://download.pytorch.org/whl/cpu && \
dev/spark-test-image/python-309/Dockerfile:    python3.9 -m pip install torcheval && \
dev/spark-test-image/python-309/Dockerfile:    python3.9 -m pip cache purge
dev/spark-test-image/python-minimum/Dockerfile:    python3.9 \
dev/spark-test-image/python-minimum/Dockerfile:    python3.9-distutils \
dev/spark-test-image/python-minimum/Dockerfile:RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.9
dev/spark-test-image/python-minimum/Dockerfile:RUN python3.9 -m pip install --force $BASIC_PIP_PKGS $CONNECT_PIP_PKGS && \
dev/spark-test-image/python-minimum/Dockerfile:    python3.9 -m pip cache purge
dev/spark-test-image/python-ps-minimum/Dockerfile:    python3.9 \
dev/spark-test-image/python-ps-minimum/Dockerfile:    python3.9-distutils \
dev/spark-test-image/python-ps-minimum/Dockerfile:RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.9
dev/spark-test-image/python-ps-minimum/Dockerfile:RUN python3.9 -m pip install --force $BASIC_PIP_PKGS $CONNECT_PIP_PKGS && \
dev/spark-test-image/python-ps-minimum/Dockerfile:    python3.9 -m pip cache purge
python/docs/source/development/contributing.rst:    conda create --name pyspark-dev-env python=3.9
python/docs/source/getting_started/install.rst:    conda install -c conda-forge pyspark  # can also add "python=3.9 some_package [etc.]" here
python/packaging/classic/setup.py:        python_requires=">=3.9",
python/packaging/client/setup.py:        python_requires=">=3.9",
python/packaging/connect/setup.py:        python_requires=">=3.9",
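
For re-checking leftovers like the list above after follow-up PRs, a small scan can help. A hedged sketch in Python, not a script from the Spark repo; the patterns are taken from the hits listed above:

```python
# Hypothetical helper (not in the Spark repo): walk the tree and report
# leftover Python 3.9 references like the grep hits listed above.
import pathlib
import re

PATTERN = re.compile(r"python3\.9|python-309|Python 3\.9|>=3\.9")

def find_leftovers(root: str = ".") -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    find_leftovers()
```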

@xinrong-meng (Member) commented:

Spark in yarn-client mode seems to be failing.

@zhengruifeng zhengruifeng marked this pull request as draft June 25, 2025 03:17
@zhengruifeng (Contributor, Author) commented:

Converting to draft since the jobs get stuck.

@dongjoon-hyun (Member) left a comment:

Thank you for working on this.

@allisonwang-db (Contributor) left a comment:

Thanks for the fix 🎉

@zhengruifeng zhengruifeng force-pushed the py_min_310 branch 2 times, most recently from 2f06a3e to fa6dc2d on July 3, 2025 08:39
@zhengruifeng (Contributor, Author) commented Jul 3, 2025:

The previous pins numpy==1.21 and pandas==2.0.0 no longer work with Python 3.10; after testing a few combinations, I think we also need to upgrade them.
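
When testing pin combinations like this by hand, a quick smoke test of the interpreter/NumPy/pandas triple is handy. A minimal sketch, assuming numpy and pandas are installed; these are not the exact commands from the PR builder:

```python
# Hedged sketch: report the interpreter and library versions, then exercise a
# trivial pandas operation to confirm the combination at least imports and runs.
import sys

import numpy as np
import pandas as pd

print("python :", sys.version.split()[0])
print("numpy  :", np.__version__)
print("pandas :", pd.__version__)

# Minimal smoke test: build a frame and aggregate it.
df = pd.DataFrame({"x": np.arange(5), "y": np.arange(5) * 2.0})
print(df.groupby(df["x"] % 2)["y"].sum())
```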

dongjoon-hyun added a commit that referenced this pull request Nov 6, 2025
### What changes were proposed in this pull request?

This PR aims to remove `Python 3.9` from `Spark Connect`.

### Why are the changes needed?

`Python 3.9` reached the end-of-life on 2025-10-31.
- https://devguide.python.org/versions/#unsupported-versions

Apache Spark 4.1.0 dropped `Python 3.9` support and we no longer have test coverage for it. We had better make that clear even for the `Spark Connect` module.
- #51259
- #51416
- #51631
- #51371

### Does this PR introduce _any_ user-facing change?

No behavior change for Python 3.10+ users.

### How was this patch tested?

Pass the CIs and manual review.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #52911 from dongjoon-hyun/SPARK-54213.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
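
In client code, dropping a version typically means tightening a floor check of roughly this shape. Illustrative only, not the actual Spark Connect source:

```python
# Illustrative only -- not the actual Spark Connect client code. A client that
# enforces the interpreter floor at import time might carry a guard like this:
import sys

MIN_PYTHON = (3, 10)  # hypothetical constant; was (3, 9) before such a change

if sys.version_info < MIN_PYTHON:
    raise RuntimeError(
        "Spark Connect requires Python %d.%d or newer; found %s"
        % (*MIN_PYTHON, sys.version.split()[0])
    )
```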
dongjoon-hyun added a commit that referenced this pull request Nov 6, 2025
dongjoon-hyun added a commit that referenced this pull request Nov 10, 2025
…emoving Python 3.9+ condition

### What changes were proposed in this pull request?

This PR aims to simplify `pyspark/util.py` doctests by removing Python 3.9+ condition.

### Why are the changes needed?

Since Apache Spark 4.1.0 dropped `Python 3.9` support, we can assume that these doctests always run on Python 3.10+.
- #51416
- #51259
- #52974

### Does this PR introduce _any_ user-facing change?

No, this is a test change.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #52975 from dongjoon-hyun/SPARK-54278.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
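
For a sense of what removing the version condition looks like, an illustrative before/after; this is not the actual `pyspark/util.py` doctest:

```python
# Illustrative only -- not the actual pyspark/util.py code. Once Python 3.10
# is the floor, a version-guarded doctest collapses to a plain one.
def first_word(s: str) -> str:
    """Return the first whitespace-separated token.

    Before (guarded so the example also behaved on 3.9):

        >>> import sys
        >>> first_word("spark connect") if sys.version_info >= (3, 10) else "spark"
        'spark'

    After (plain doctest, valid now that Python 3.10+ is assumed):

        >>> first_word("spark connect")
        'spark'
    """
    return s.split()[0]

if __name__ == "__main__":
    import doctest

    doctest.testmod()
```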
dongjoon-hyun added a commit that referenced this pull request Nov 10, 2025
zifeif2 pushed a commit to zifeif2/spark that referenced this pull request Nov 22, 2025
zifeif2 pushed a commit to zifeif2/spark that referenced this pull request Nov 22, 2025
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request Nov 25, 2025
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request Nov 25, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Dec 30, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Dec 30, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Dec 31, 2025
baibaichen added a commit to apache/incubator-gluten that referenced this pull request Dec 31, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Dec 31, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Dec 31, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 4, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 4, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 5, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 5, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 6, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 6, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 6, 2026
baibaichen added a commit to baibaichen/gluten that referenced this pull request Jan 7, 2026
baibaichen added a commit to apache/incubator-gluten that referenced this pull request Jan 7, 2026
| Cause | Type | Category | Description | Affected Files |
|-------|------|----------|-------------|----------------|
| - | Feat | Feature | Introduce Spark41Shims and update build configuration to support Spark 4.1. | pom.xml<br>shims/pom.xml<br>shims/spark41/pom.xml<br>shims/spark41/.../META-INF/services/org.apache.gluten.sql.shims.SparkShimProvider<br>shims/spark41/.../spark41/Spark41Shims.scala<br>shims/spark41/.../spark41/SparkShimProvider.scala |
| [#51477](apache/spark#51477) | Fix | Compatibility | Use class name instead of class object for streaming call detection to ensure Spark 4.1 compatibility. | gluten-core/.../caller/CallerInfo.scala |
| [#50852](apache/spark#50852) | Fix | Compatibility | Add printOutputColumns parameter to generateTreeString methods | shims/spark41/.../GenerateTreeStringShim.scala |
| [#51775](apache/spark#51775) | Fix | Compatibility | Remove unused MDC import in FileSourceScanExecShim.scala | shims/spark41/.../FileSourceScanExecShim.scala |
| [#51979](apache/spark#51979) | Fix | Compatibility | Add missing StoragePartitionJoinParams import in BatchScanExecShim and AbstractBatchScanExec | shims/spark41/.../v2/AbstractBatchScanExec.scala<br>shims/spark41/.../v2/BatchScanExecShim.scala |
| [#51302](apache/spark#51302) | Fix | Compatibility | Remove TimeAdd from ExpressionConverter and ExpressionMappings for test | gluten-substrait/.../ExpressionConverter.scala<br>gluten-substrait/.../ExpressionMappings.scala |
| [#50598](apache/spark#50598) | Fix | Compatibility | Adapt to QueryExecution.createSparkPlan interface change | gluten-substrait/.../GlutenImplicits.scala<br>shims/spark\*/.../shims/spark\*/Spark*Shims.scala |
| [#52599](apache/spark#52599) | Fix | Compatibility | Adapt to DataSourceV2Relation interface change | backends-velox/.../ArrowConvertorRule.scala |
| [#52384](apache/spark#52384) | Fix | Compatibility | Use the new interface of ParquetFooterReader | backends-velox/.../ParquetMetadataUtils.scala<br>gluten-ut/spark40/.../parquet/GlutenParquetRowIndexSuite.scala<br>shims/spark*/.../parquet/ParquetFooterReaderShim.scala |
| [#52509](apache/spark#52509) | Fix | Build | Update Scala version to 2.13.17 in pom.xml to fix `java.lang.NoSuchMethodError: 'java.lang.String scala.util.hashing.MurmurHash3$.caseClassHash$default$2()'` | pom.xml |
| - | Fix | Test | Refactor Spark version checks in VeloxHashJoinSuite to improve readability and maintainability | backends-velox/.../VeloxHashJoinSuite.scala |
| [#50849](apache/spark#50849) | Fix | Test | Fix MiscOperatorSuite to support OneRowRelationExec plan Spark 4.1 | backends-velox/.../MiscOperatorSuite.scala |
| [#52723](apache/spark#52723) | Fix | Compatibility | Add GeographyVal and GeometryVal support in ColumnarArrayShim | shims/spark41/.../vectorized/ColumnarArrayShim.java |
| [#48470](apache/spark#48470) | 4.1.0 | Exclude | Exclude split test in VeloxStringFunctionsSuite | backends-velox/.../VeloxStringFunctionsSuite.scala |
| [#51259](apache/spark#51259) | 4.1.0 | Exclude | Only run ArrowEvalPythonExecSuite tests up to Spark 4.0; CI Python needs to be updated to 3.10 | backends-velox/.../python/ArrowEvalPythonExecSuite.scala |
QCLyu pushed a commit to QCLyu/incubator-gluten that referenced this pull request Jan 8, 2026
QCLyu pushed a commit to QCLyu/incubator-gluten that referenced this pull request Jan 8, 2026