
Commit a6dd607

[SPARK-39273][PS][TESTS] Make PandasOnSparkTestCase inherit ReusedSQLTestCase
### What changes were proposed in this pull request?

This PR proposes to make `PandasOnSparkTestCase` inherit `ReusedSQLTestCase`.

### Why are the changes needed?

We don't need this:

```python
    @classmethod
    def tearDownClass(cls):
        # We don't stop Spark session to reuse across all tests.
        # The Spark session will be started and stopped at PyTest session level.
        # Please see pyspark/pandas/conftest.py.
        pass
```

anymore in Apache Spark. This existed to speed up the tests when the code lived in the Koalas repository, where the tests ran sequentially in a single process. In Apache Spark the tests run in multiple processes, so this override is no longer needed.

### Does this PR introduce _any_ user-facing change?

No, test-only.

### How was this patch tested?

Existing CI should test it out.

Closes apache#36652 from HyukjinKwon/SPARK-39273.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
1 parent 8052751 commit a6dd607

File tree

1 file changed: +4 −13 lines changed


python/pyspark/testing/pandasutils.py

Lines changed: 4 additions & 13 deletions
```diff
@@ -18,7 +18,6 @@
 import functools
 import shutil
 import tempfile
-import unittest
 import warnings
 from contextlib import contextmanager
 from distutils.version import LooseVersion
@@ -32,9 +31,8 @@
 from pyspark.pandas.frame import DataFrame
 from pyspark.pandas.indexes import Index
 from pyspark.pandas.series import Series
-from pyspark.pandas.utils import default_session, SPARK_CONF_ARROW_ENABLED
-from pyspark.testing.sqlutils import SQLTestUtils
-
+from pyspark.pandas.utils import SPARK_CONF_ARROW_ENABLED
+from pyspark.testing.sqlutils import ReusedSQLTestCase
 
 tabulate_requirement_message = None
 try:
@@ -61,19 +59,12 @@
 have_plotly = plotly_requirement_message is None
 
 
-class PandasOnSparkTestCase(unittest.TestCase, SQLTestUtils):
+class PandasOnSparkTestCase(ReusedSQLTestCase):
     @classmethod
     def setUpClass(cls):
-        cls.spark = default_session()
+        super(PandasOnSparkTestCase, cls).setUpClass()
         cls.spark.conf.set(SPARK_CONF_ARROW_ENABLED, True)
 
-    @classmethod
-    def tearDownClass(cls):
-        # We don't stop Spark session to reuse across all tests.
-        # The Spark session will be started and stopped at PyTest session level.
-        # Please see pyspark/pandas/conftest.py.
-        pass
-
     def assertPandasEqual(self, left, right, check_exact=True):
         if isinstance(left, pd.DataFrame) and isinstance(right, pd.DataFrame):
             try:
```
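The refactoring above follows a standard `unittest` pattern: the base class owns the shared fixture's lifecycle, and subclasses chain through `super().setUpClass()` instead of duplicating setup and overriding teardown with a no-op. A minimal, self-contained sketch of that pattern (the `ReusedResourceTestCase`/`PandasStyleTestCase` names and the dict stand-in for a SparkSession are hypothetical, not part of PySpark):

```python
import unittest


class ReusedResourceTestCase(unittest.TestCase):
    """Base class that creates one shared resource per test class,
    analogous to how ReusedSQLTestCase manages a shared SparkSession."""

    @classmethod
    def setUpClass(cls):
        # Stand-in for starting a SparkSession (hypothetical resource).
        cls.resource = {"started": True, "conf": {}}

    @classmethod
    def tearDownClass(cls):
        # The base class owns teardown, so subclasses no longer need
        # a no-op tearDownClass override like the one deleted above.
        cls.resource = None


class PandasStyleTestCase(ReusedResourceTestCase):
    @classmethod
    def setUpClass(cls):
        # Chain to the base class first, then tweak configuration --
        # the same shape as the new PandasOnSparkTestCase.setUpClass.
        super(PandasStyleTestCase, cls).setUpClass()
        cls.resource["conf"]["arrow.enabled"] = True

    def test_shared_resource_is_configured(self):
        self.assertTrue(self.resource["started"])
        self.assertTrue(self.resource["conf"]["arrow.enabled"])
```

Because the base class performs real teardown, a pytest session-level conftest (as referenced by the deleted comment) is no longer the only thing keeping sessions from leaking between test classes.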
