[pull] master from apache:master #35
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
…commissioner

### What changes were proposed in this pull request?
Fix the incorrect "remaining shuffles" count logged by BlockManagerDecommissioner.

### Why are the changes needed?
BlockManagerDecommissioner should log the correct number of remaining shuffles. The current log reports the total number of shuffles as the remaining count:
```
4 of 24 local shuffles are added. In total, 24 shuffles are remained. 2022-09-30 17:42:15.035 PDT
0 of 24 local shuffles are added. In total, 24 shuffles are remained. 2022-09-30 17:42:45.069 PDT
0 of 24 local shuffles are added. In total, 24 shuffles are remained.
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manually tested

Closes #38078 from warrenzhu25/deco-log.

Authored-by: Warren Zhu <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
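To illustrate the bookkeeping behind this fix, here is a minimal, hypothetical sketch (the names are illustrative, not the actual `BlockManagerDecommissioner` fields): the remaining count should be the total number of shuffles minus the shuffles already queued for migration, rather than the total itself.

```scala
// Hypothetical sketch of the corrected counting; not the actual Spark code.
val allShuffles = 24      // all local shuffles found on this block manager
val migratedSoFar = 4     // shuffles already added to the migration queue (incl. this round)
val newlyAdded = 4        // shuffles added in this round
val remaining = allShuffles - migratedSoFar   // the old log printed allShuffles here
println(s"$newlyAdded of $allShuffles local shuffles are added. " +
  s"In total, $remaining shuffles are remained.")
// prints: 4 of 24 local shuffles are added. In total, 20 shuffles are remained.
```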
…levant tests in the `yarn` module

### What changes were proposed in this pull request?
SPARK-40490 made the test cases related to `YarnShuffleIntegrationSuite` verify the `registeredExecFile` reload scenario again, so this PR adds `ExtendedLevelDBTest` to the LevelDB-related tests in the `yarn` module so that they can be skipped on macOS/Apple Silicon via `-Dtest.exclude.tags=org.apache.spark.tags.ExtendedLevelDBTest`.

### Why are the changes needed?
Following the existing convention, adding `ExtendedLevelDBTest` to the LevelDB-related tests lets the `yarn` module skip these tests on macOS/Apple Silicon through `-Dtest.exclude.tags=org.apache.spark.tags.ExtendedLevelDBTest`.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Pass GitHub Actions
- Manual test on macOS/Apple Silicon

```
mvn clean install -pl resource-managers/yarn -Pyarn -am -DskipTests
mvn clean install -pl resource-managers/yarn -Pyarn -Dtest.exclude.tags=org.apache.spark.tags.ExtendedLevelDBTest
```

**Before**
```
*** RUN ABORTED ***
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /Users/yangjie01/SourceCode/git/spark-source/resource-managers/yarn/target/tmp/libleveldbjni-64-1-7057248091178764836.8: dlopen(/Users/yangjie01/SourceCode/git/spark-source/resource-managers/yarn/target/tmp/libleveldbjni-64-1-7057248091178764836.8, 1): no suitable image found. Did find: /Users/yangjie01/SourceCode/git/spark-source/resource-managers/yarn/target/tmp/libleveldbjni-64-1-7057248091178764836.8: no matching architecture in universal wrapper /Users/yangjie01/SourceCode/git/spark-source/resource-managers/yarn/target/tmp/libleveldbjni-64-1-7057248091178764836.8: no matching architecture in universal wrapper]
  at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
  at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
  at org.fusesource.leveldbjni.JniDBFactory.<clinit>(JniDBFactory.java:48)
  at org.apache.spark.network.util.LevelDBProvider.initLevelDB(LevelDBProvider.java:48)
  at org.apache.spark.network.util.DBProvider.initDB(DBProvider.java:40)
  at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:131)
  at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:100)
  at org.apache.spark.network.shuffle.ExternalBlockHandler.<init>(ExternalBlockHandler.java:90)
  at org.apache.spark.network.yarn.YarnShuffleService.serviceInit(YarnShuffleService.java:276)
  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
  ...
```

**After**
```
Run completed in 9 minutes, 46 seconds.
Total number of tests run: 164
Suites: completed 23, aborted 0
Tests: succeeded 164, failed 0, canceled 1, ignored 0, pending 0
All tests passed.
```

Closes #38095 from LuciferYang/SPARK-40648.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
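As a rough illustration of how such a tag is typically attached (the suite name below is made up, not one of the suites touched by this PR), a class-level annotation is enough for the `-Dtest.exclude.tags` filter to pick the suite up:

```scala
import org.apache.spark.tags.ExtendedLevelDBTest
import org.scalatest.funsuite.AnyFunSuite

// Illustrative only: a LevelDB-backed suite tagged so it can be excluded with
// -Dtest.exclude.tags=org.apache.spark.tags.ExtendedLevelDBTest on Apple Silicon.
@ExtendedLevelDBTest
class ExampleLevelDBBackedSuite extends AnyFunSuite {
  test("operation that requires the native leveldbjni library") {
    assert(1 + 1 == 2) // placeholder body
  }
}
```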
### What changes were proposed in this pull request?
In this PR I propose a new session config: `spark.sql.ansi.double_quoted_identifiers` (true | false). When true, the parser interprets a double-quoted string not as a string literal but, in compliance with ANSI SQL, as an identifier.

We do this by splitting the double-quoted literal off the STRING token onto its own BACKQUOTED_STRING token in the lexer. In the grammar we replace all STRING references with a rule `stringLit` covering STRING and BACKQUOTED_STRING, with the latter being conditional on the config setting being false. (Note there already is a rule `stringLiteral`, hence the slightly quirky name.) Similarly, `quotedIdentifier` is extended with BACKQUOTED_STRING, conditional on the config being true.

Note that this is NOT PERFECT. The escape logic for strings (backslash) is different from that of identifiers (doubled double-quotes). Unfortunately I do not know how to change this, since introducing a NEW token DOUBLE_QUOTED_IDENTIFIER has proven to break STRING, presumably due to the overlap in the lexer patterns. At this point I consider this an edge case.

### Why are the changes needed?
ANSI requires identifiers to be quoted with double quotes. We have seen customer requests for support, especially around column aliases, but it makes sense to have a holistic fix rather than a context-specific one.

### Does this PR introduce _any_ user-facing change?
Yes, this is a new config introducing a new feature. It is not a breaking change, though.

### How was this patch tested?
double_quoted_identifiers.sql was added to the SQL tests.

Closes #38022 from srielau/SPARK-40585-double-quoted-identifier.

Lead-authored-by: Serge Rielau <[email protected]>
Co-authored-by: Serge Rielau <[email protected]>
Co-authored-by: Gengliang Wang <[email protected]>
Signed-off-by: Gengliang Wang <[email protected]>
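A hedged spark-shell sketch of the intended behavior (the config key is taken verbatim from the description above; the query and comments are illustrative, not output from this PR's test suite):

```scala
// With the config off, a double-quoted token is a string literal.
spark.conf.set("spark.sql.ansi.double_quoted_identifiers", "false")
spark.sql("""SELECT "id" FROM range(3)""").show()  // "id" is a string literal: every row shows the text id

// With the config on, the same token is parsed as an identifier (like `id`).
spark.conf.set("spark.sql.ansi.double_quoted_identifiers", "true")
spark.sql("""SELECT "id" FROM range(3)""").show()  // "id" is the column identifier: rows show 0, 1, 2
```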
…ckend

### What changes were proposed in this pull request?
Fix the shutdown hook call-through to CoarseGrainedSchedulerBackend.

### Why are the changes needed?
Sometimes, if the driver shuts down abnormally, resources may be left dangling.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Existing tests.

Closes #37885 from holdenk/shutdownhook-for-k8s.

Lead-authored-by: Holden Karau <[email protected]>
Co-authored-by: Holden Karau <[email protected]>
Signed-off-by: Holden Karau <[email protected]>
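As a rough sketch of the mechanism involved (using Spark's internal `ShutdownHookManager` utility; the cleanup function below is a hypothetical placeholder, not the code in this PR):

```scala
import org.apache.spark.util.ShutdownHookManager

// Hypothetical stand-in for what a scheduler backend would release
// (executor pods, allocated cluster resources, etc.) on driver shutdown.
def releaseClusterResources(): Unit =
  println("stopping scheduler backend and releasing cluster resources")

// Register the hook so cleanup also runs when the driver exits abnormally.
ShutdownHookManager.addShutdownHook { () =>
  releaseClusterResources()
}
```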
… use toPandas()

### What changes were proposed in this pull request?
Currently, connect's `Collect()` returns a Pandas DataFrame, which does not match the PySpark DataFrame API, which returns a `List[Row]`:
https://github.com/apache/spark/blob/ceb8527413288b4d5c54d3afd76d00c9e26817a1/python/pyspark/sql/connect/data_frame.py#L227
https://github.com/apache/spark/blob/ceb8527413288b4d5c54d3afd76d00c9e26817a1/python/pyspark/sql/dataframe.py#L1119

The underlying implementation has been generating a Pandas DataFrame, though. In this case, we can choose to use `toPandas()` and have `Collect()` throw an exception recommending the use of `toPandas()`.

### Why are the changes needed?
The goal of the connect project is still to align with the existing DataFrame API as much as possible. Given that `Collect()` is not compatible with the existing Python client, we can choose to disable it for now.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
UT

Closes #38089 from amaliujia/SPARK-40645.

Lead-authored-by: Rui Wang <[email protected]>
Co-authored-by: Hyukjin Kwon <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
… proto

### What changes were proposed in this pull request?
Support `SELECT *` in an explicit way in the connect proto.

### Why are the changes needed?
The current proto uses an empty project list for `SELECT *`. However, this is an implicit encoding that makes it hard to differentiate between `not set` and `set but empty` (the latter is an invalid plan). For longer-term proto compatibility, we should always use explicit fields for passing through information.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
UT

Closes #38023 from amaliujia/SPARK-40587.

Authored-by: Rui Wang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
### What changes were proposed in this pull request?
This PR replaces `Random(hashing.byteswap32(index))` with `XORShiftRandom(index)` to distribute elements evenly across output partitions.

### Why are the changes needed?
The distribution produced by `XORShiftRandom` is better. For example:

1. The number of output files has changed since SPARK-40407. [Some downstream projects](https://github.com/apache/iceberg/blob/c07f2aabc0a1d02f068ecf1514d2479c0fbdd3b0/spark/v3.3/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestRewriteDataFilesProcedure.java#L578-L579) use repartition to determine the number of output files in their tests.
   ```
   bin/spark-shell --master "local[2]"
   spark.range(10).repartition(10).write.mode("overwrite").parquet("/tmp/spark/repartition")
   ```
   Before this PR and after SPARK-40407, the number of output files is 8. After this PR, or before SPARK-40407, the number of output files is 10.

2. The distribution produced by `XORShiftRandom` is more even:
   ```scala
   import java.util.Random
   import org.apache.spark.util.random.XORShiftRandom
   import scala.util.hashing

   def distribution(count: Int, partition: Int) = {
     println((1 to count).map(partitionId => new Random(partitionId).nextInt(partition))
       .groupBy(f => f)
       .map(_._2.size).mkString(". "))
     println((1 to count).map(partitionId => new Random(hashing.byteswap32(partitionId)).nextInt(partition))
       .groupBy(f => f)
       .map(_._2.size).mkString(". "))
     println((1 to count).map(partitionId => new XORShiftRandom(partitionId).nextInt(partition))
       .groupBy(f => f)
       .map(_._2.size).mkString(". "))
   }

   distribution(200, 4)
   ```
   The output:
   ```
   200
   50. 60. 46. 44
   55. 48. 43. 54
   ```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Unit test.

Closes #38106 from wangyum/SPARK-40660.

Authored-by: Yuming Wang <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
…hadoop2`

### What changes were proposed in this pull request?
This PR adds a deduplication step in the `yarn.Client#populateClasspath` method when `Utils.isTesting` is true, so that `ENV_DIST_CLASSPATH` only adds the part of `extraClassPath` that is not already on `CLASSPATH`. This avoids `java.io.IOException: error=7, Argument list too long`.

### Why are the changes needed?
Fix the daily test failure of the yarn module with `-Phadoop-2`. The [daily test failed](https://github.com/apache/spark/actions/runs/3174476348/jobs/5171331515) as follows:

```
Exception message: Cannot run program "bash" (in directory "/home/runner/work/spark/spark/resource-managers/yarn/target/org.apache.spark.deploy.yarn.YarnClusterSuite/org.apache.spark.deploy.yarn.YarnClusterSuite-localDir-nm-0_0/usercache/runner/appcache/application_1664721938509_0027/container_1664721938509_0027_02_000001"): error=7, Argument list too long
[info] Stack trace: java.io.IOException: Cannot run program "bash" (in directory "/home/runner/work/spark/spark/resource-managers/yarn/target/org.apache.spark.deploy.yarn.YarnClusterSuite/org.apache.spark.deploy.yarn.YarnClusterSuite-localDir-nm-0_0/usercache/runner/appcache/application_1664721938509_0027/container_1664721938509_0027_02_000001"): error=7, Argument list too long
[info]   at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
[info]   at org.apache.hadoop.util.Shell.runCommand(Shell.java:526)
[info]   at org.apache.hadoop.util.Shell.run(Shell.java:482)
[info]   at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
[info]   at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
[info]   at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
[info]   at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:750)
[info] Caused by: java.io.IOException: error=7, Argument list too long
[info]   at java.lang.UNIXProcess.forkAndExec(Native Method)
[info]   at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
[info]   at java.lang.ProcessImpl.start(ProcessImpl.java:134)
[info]   at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
[info]   ... 10 more
[info]
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Pass GitHub Actions
- Verify that the test with `hadoop2` is successful: https://github.com/LuciferYang/spark/actions/runs/3175111616/jobs/5172833416

Closes #38079 from LuciferYang/SPARK-40635.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
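A hedged sketch of the deduplication idea (not the actual `yarn.Client#populateClasspath` code; the paths are made up): only the `extraClassPath` entries that are not already on the classpath get appended, which keeps the container launch command within the OS argument-length limit.

```scala
import java.io.File
import scala.collection.mutable

// Illustrative only: dedupe extraClassPath entries before appending them.
val classpath = mutable.LinkedHashSet("/opt/spark/jars/*", "/etc/hadoop/conf")
val extraClassPath = Seq("/etc/hadoop/conf", "/opt/extra/lib/*") // hypothetical config value

extraClassPath.filterNot(classpath.contains).foreach(classpath += _)
println(classpath.mkString(File.pathSeparator))
// /opt/spark/jars/*:/etc/hadoop/conf:/opt/extra/lib/*
```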
…CY_ERROR_TEMP_2000-2025

### What changes were proposed in this pull request?
This PR proposes to migrate 26 execution errors onto temporary error classes with the prefix `_LEGACY_ERROR_TEMP_2000` to `_LEGACY_ERROR_TEMP_2024`. The `_LEGACY_ERROR_TEMP_` prefix marks the error classes as dev-facing error messages that won't be exposed to end users.

### Why are the changes needed?
To speed up the error class migration. Migrating onto temporary error classes allows us to analyze the errors so we can detect the most popular error classes.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
```
$ build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"
$ build/sbt "test:testOnly *SQLQuerySuite"
$ build/sbt -Phadoop-3 -Phive-thriftserver catalyst/test hive-thriftserver/test
```

Closes #38104 from itholic/SPARK-40540-2000.

Authored-by: itholic <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
pull bot pushed a commit that referenced this pull request on May 1, 2024:
… spark docker image

### What changes were proposed in this pull request?
The PR aims to update the names of the packages removed when building the Spark docker image.

### Why are the changes needed?
When our default image base was switched from `ubuntu 20.04` to `ubuntu 22.04`, the set of unused packages in the base image changed. In order to eliminate some warnings when building images and to free disk space more accurately, we need to correct the package list.

Before:
```
#35 [29/31] RUN apt-get remove --purge -y '^aspnet.*' '^dotnet-.*' '^llvm-.*' 'php.*' '^mongodb-.*' snapd google-chrome-stable microsoft-edge-stable firefox azure-cli google-cloud-sdk mono-devel powershell libgl1-mesa-dri || true
#35 0.489 Reading package lists...
#35 0.505 Building dependency tree...
#35 0.507 Reading state information...
#35 0.511 E: Unable to locate package ^aspnet.*
#35 0.511 E: Couldn't find any package by glob '^aspnet.*'
#35 0.511 E: Couldn't find any package by regex '^aspnet.*'
#35 0.511 E: Unable to locate package ^dotnet-.*
#35 0.511 E: Couldn't find any package by glob '^dotnet-.*'
#35 0.511 E: Couldn't find any package by regex '^dotnet-.*'
#35 0.511 E: Unable to locate package ^llvm-.*
#35 0.511 E: Couldn't find any package by glob '^llvm-.*'
#35 0.511 E: Couldn't find any package by regex '^llvm-.*'
#35 0.511 E: Unable to locate package ^mongodb-.*
#35 0.511 E: Couldn't find any package by glob '^mongodb-.*'
#35 0.511 EPackage 'php-crypt-gpg' is not installed, so not removed
#35 0.511 Package 'php' is not installed, so not removed
#35 0.511 : Couldn't find any package by regex '^mongodb-.*'
#35 0.511 E: Unable to locate package snapd
#35 0.511 E: Unable to locate package google-chrome-stable
#35 0.511 E: Unable to locate package microsoft-edge-stable
#35 0.511 E: Unable to locate package firefox
#35 0.511 E: Unable to locate package azure-cli
#35 0.511 E: Unable to locate package google-cloud-sdk
#35 0.511 E: Unable to locate package mono-devel
#35 0.511 E: Unable to locate package powershell
#35 DONE 0.5s

#36 [30/31] RUN apt-get autoremove --purge -y
#36 0.063 Reading package lists...
#36 0.079 Building dependency tree...
#36 0.082 Reading state information...
#36 0.088 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
#36 DONE 0.4s
```

After:
```
#38 [32/36] RUN apt-get remove --purge -y 'gfortran-11' 'humanity-icon-theme' 'nodejs-doc' || true
#38 0.066 Reading package lists...
#38 0.087 Building dependency tree...
#38 0.089 Reading state information...
#38 0.094 The following packages were automatically installed and are no longer required:
#38 0.094 at-spi2-core bzip2-doc dbus-user-session dconf-gsettings-backend
#38 0.095 dconf-service gsettings-desktop-schemas gtk-update-icon-cache
#38 0.095 hicolor-icon-theme libatk-bridge2.0-0 libatk1.0-0 libatk1.0-data
#38 0.095 libatspi2.0-0 libbz2-dev libcairo-gobject2 libcolord2 libdconf1 libepoxy0
#38 0.095 libgfortran-11-dev libgtk-3-common libjs-highlight.js libllvm11
#38 0.095 libncurses-dev libncurses5-dev libphobos2-ldc-shared98 libreadline-dev
#38 0.095 librsvg2-2 librsvg2-common libvte-2.91-common libwayland-client0
#38 0.095 libwayland-cursor0 libwayland-egl1 libxdamage1 libxkbcommon0
#38 0.095 session-migration tilix-common xkb-data
#38 0.095 Use 'apt autoremove' to remove them.
#38 0.096 The following packages will be REMOVED:
#38 0.096 adwaita-icon-theme* gfortran* gfortran-11* humanity-icon-theme* libgtk-3-0*
#38 0.096 libgtk-3-bin* libgtkd-3-0* libvte-2.91-0* libvted-3-0* nodejs-doc*
#38 0.096 r-base-dev* tilix* ubuntu-mono*
#38 0.248 0 upgraded, 0 newly installed, 13 to remove and 0 not upgraded.
#38 0.248 After this operation, 99.6 MB disk space will be freed.
...
(Reading database ... 70597 files and directories currently installed.)
#38 0.304 Removing r-base-dev (4.1.2-1ubuntu2) ...
#38 0.319 Removing gfortran (4:11.2.0-1ubuntu1) ...
#38 0.340 Removing gfortran-11 (11.4.0-1ubuntu1~22.04) ...
#38 0.356 Removing tilix (1.9.4-2build1) ...
#38 0.377 Removing libvted-3-0:amd64 (3.10.0-1ubuntu1) ...
#38 0.392 Removing libvte-2.91-0:amd64 (0.68.0-1) ...
#38 0.407 Removing libgtk-3-bin (3.24.33-1ubuntu2) ...
#38 0.422 Removing libgtkd-3-0:amd64 (3.10.0-1ubuntu1) ...
#38 0.436 Removing nodejs-doc (12.22.9~dfsg-1ubuntu3.4) ...
#38 0.457 Removing libgtk-3-0:amd64 (3.24.33-1ubuntu2) ...
#38 0.488 Removing ubuntu-mono (20.10-0ubuntu2) ...
#38 0.754 Removing humanity-icon-theme (0.6.16) ...
#38 1.362 Removing adwaita-icon-theme (41.0-1ubuntu1) ...
#38 1.537 Processing triggers for libc-bin (2.35-0ubuntu3.6) ...
#38 1.566 Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
#38 1.577 Processing triggers for libglib2.0-0:amd64 (2.72.4-0ubuntu2.2) ...
(Reading database ... 56946 files and directories currently installed.)
#38 1.645 Purging configuration files for libgtk-3-0:amd64 (3.24.33-1ubuntu2) ...
#38 1.657 Purging configuration files for ubuntu-mono (20.10-0ubuntu2) ...
#38 1.670 Purging configuration files for humanity-icon-theme (0.6.16) ...
#38 1.682 Purging configuration files for adwaita-icon-theme (41.0-1ubuntu1) ...
#38 DONE 1.7s

#39 [33/36] RUN apt-get autoremove --purge -y
#39 0.061 Reading package lists...
#39 0.075 Building dependency tree...
#39 0.077 Reading state information...
#39 0.083 The following packages will be REMOVED:
#39 0.083 at-spi2-core* bzip2-doc* dbus-user-session* dconf-gsettings-backend*
#39 0.083 dconf-service* gsettings-desktop-schemas* gtk-update-icon-cache*
#39 0.083 hicolor-icon-theme* libatk-bridge2.0-0* libatk1.0-0* libatk1.0-data*
#39 0.083 libatspi2.0-0* libbz2-dev* libcairo-gobject2* libcolord2* libdconf1*
#39 0.083 libepoxy0* libgfortran-11-dev* libgtk-3-common* libjs-highlight.js*
#39 0.083 libllvm11* libncurses-dev* libncurses5-dev* libphobos2-ldc-shared98*
#39 0.083 libreadline-dev* librsvg2-2* librsvg2-common* libvte-2.91-common*
#39 0.083 libwayland-client0* libwayland-cursor0* libwayland-egl1* libxdamage1*
#39 0.083 libxkbcommon0* session-migration* tilix-common* xkb-data*
#39 0.231 0 upgraded, 0 newly installed, 36 to remove and 0 not upgraded.
#39 0.231 After this operation, 124 MB disk space will be freed.
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- Manual test.
- Pass GA.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes apache#46258 from panbingkun/remove_packages_on_ubuntu.

Authored-by: panbingkun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
pull bot pushed a commit that referenced this pull request on May 17, 2024:
…dundant SYSTEM password reset

### What changes were proposed in this pull request?
This pull request improves the Oracle JDBC tests by skipping the redundant SYSTEM password reset.

### Why are the changes needed?
These changes are necessary to clean up the Oracle JDBC tests. This pull request effectively reverts the modifications introduced in [SPARK-46592](https://issues.apache.org/jira/browse/SPARK-46592) and [PR apache#44594](apache#44594), which attempted to work around the sporadic occurrence of ORA-65048 and ORA-04021 errors by setting the Oracle parameter DDL_LOCK_TIMEOUT. As discussed in [issue #35](gvenzl/oci-oracle-free#35), setting DDL_LOCK_TIMEOUT did not resolve the issue. The root cause appears to be an Oracle bug or unwanted behavior related to the use of Pluggable Database (PDB) rather than the expected functionality of Oracle itself.

Additionally, with [SPARK-48141](https://issues.apache.org/jira/browse/SPARK-48141), we have upgraded the Oracle version used in the tests to Oracle Free 23ai, version 23.4. This upgrade should help address some of the issues observed with the previous Oracle version.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
This patch was tested using the existing test suite, with a particular focus on the Oracle JDBC tests. The following steps were executed:
```
export ENABLE_DOCKER_INTEGRATION_TESTS=1
./build/sbt -Pdocker-integration-tests "docker-integration-tests/testOnly org.apache.spark.sql.jdbc.OracleIntegrationSuite"
```

### Was this patch authored or co-authored using generative AI tooling?
No

Closes apache#46598 from LucaCanali/fixOracleIntegrationTests.

Lead-authored-by: Kent Yao <[email protected]>
Co-authored-by: Luca Canali <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
Can you help keep this open source service alive? 💖 Please sponsor : )