Merged
Commits
120 commits
46620f2
Moving one-line linting and checks from lint.sh to azure steps
datapythonista Sep 27, 2018
dc8c528
Moving everything from lint.sh to azure (so far repeating the command…
datapythonista Sep 27, 2018
adeb0ca
Fixing continueOnError settings
datapythonista Sep 27, 2018
dcfb203
Reverting the exit status of the grep commands, we want them to fail …
datapythonista Sep 27, 2018
b765df8
Testing the continueOnError, doesn't seem to be working at job level,…
datapythonista Sep 27, 2018
3c77fc9
Escaping exclamation marks to reverse the exit status of grep, azure…
datapythonista Sep 27, 2018
3d8af7d
Adding the continueOnError to every step, and trying if bash instead …
datapythonista Sep 27, 2018
ed1064d
Fixes to missing continueOnError, and to the grep command exit status
datapythonista Sep 27, 2018
8cb7b0f
Testing couple of ways to invert the exit status of grep
datapythonista Sep 27, 2018
dee5a08
Fixing azure config syntax error
datapythonista Sep 27, 2018
54f9e98
Fixing the checks (they should be ok now), and adding an intentional …
datapythonista Sep 27, 2018
20cb360
Removing intentional pep8 issue, and fixing last known problems
datapythonista Sep 27, 2018
50d2757
fixing style of azure settings
datapythonista Sep 27, 2018
19a213c
Replacing continueOnError (that converts errors in warnings) to condi…
datapythonista Sep 29, 2018
0cf5da9
Fixing multiline in yaml, and removing job level continueOnError
datapythonista Sep 29, 2018
4d10702
Changing the format of flake8 output to azure (so it creates links, a…
datapythonista Sep 30, 2018
a356f03
Removing unnecessary quotes in flake8 format config
datapythonista Sep 30, 2018
50cb867
flake8 format breaks pip (ConfigParser) when present in setup.cfg, mo…
datapythonista Sep 30, 2018
091193c
Adding azure formatting to all failing flake8
datapythonista Sep 30, 2018
4a2ddba
Merging from master
datapythonista Oct 10, 2018
e7c276a
Fixing azure job name
datapythonista Oct 10, 2018
8f73c08
Making the format look good for flake8 and patterns, and adding file …
datapythonista Oct 12, 2018
87b5048
Trying if environment variables can be defined in the script setting
datapythonista Oct 12, 2018
dc2e3a6
Merging from master
datapythonista Nov 4, 2018
5ab03ab
Removing file with errors to be displayed, and comment in the setup.cfg
datapythonista Nov 4, 2018
ca9e12a
Changing cython cast grep to invgrep
datapythonista Nov 4, 2018
c591a17
Merging from master
datapythonista Nov 9, 2018
c671da5
Building source of the checks build and moving the documentation buil…
datapythonista Nov 9, 2018
a6eed3e
Adding dependencies file for the checks and doc build
datapythonista Nov 9, 2018
3b853f9
Renaming conda yaml env variable
datapythonista Nov 9, 2018
850202d
Fixing env variables to set up conda
datapythonista Nov 9, 2018
a896532
Debugging why azure is not installing desired dependencies
datapythonista Nov 9, 2018
bed55be
Fixing once more env variables
datapythonista Nov 9, 2018
e555ce0
Fixing indentation in dependencies file
datapythonista Nov 9, 2018
101f7f3
Adding missing env variables and dependencies
datapythonista Nov 10, 2018
fce22e6
Fixing unset env variables for the doc build
datapythonista Nov 10, 2018
167f6dc
Fixing directory for the doc build script
datapythonista Nov 10, 2018
d44f189
WIP: Testing a simpler way to build and upload the documentation
datapythonista Nov 10, 2018
906da22
Printing installed dependencies in checks job (temporary, to debug py…
datapythonista Nov 10, 2018
9fb3999
Printing installed dependencies in azure, for debugging
datapythonista Nov 11, 2018
c72c0e5
Fixing bug when uploading the docs to github pages
datapythonista Nov 11, 2018
de1f52f
Adding an extra error to the validation of docstrings, as it's been f…
datapythonista Nov 11, 2018
e4aa371
Adding new docstring error code to the validation, and tests with the…
datapythonista Nov 11, 2018
f33b6e6
Merging from master
datapythonista Nov 12, 2018
63cfd5b
Generating the script of the deps check in azure format
datapythonista Nov 12, 2018
eb558df
Publishing docs with azure artifacts instead of github pages, and add…
datapythonista Nov 12, 2018
371f06e
Removing unnecessary checks on travis on whether the doc is being bui…
datapythonista Nov 13, 2018
ca6e910
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 13, 2018
d262a04
Debugging why pyqt is not being installed
datapythonista Nov 13, 2018
fc4574c
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 13, 2018
168ec55
Adding section names check to validate docstrings, now that all wrong…
datapythonista Nov 13, 2018
2f1b270
Fixing conda install pyqt for debugging
datapythonista Nov 13, 2018
798698d
Merging from master
datapythonista Nov 19, 2018
a3f601c
Using github pages to publish the documentation
datapythonista Nov 19, 2018
5ea21c6
Testing if linux variables can be defined once for the whole job
datapythonista Nov 19, 2018
46d281c
Using env for the environment variables, and implemented azure storag…
datapythonista Nov 19, 2018
faf49e9
Fixing typo
datapythonista Nov 19, 2018
fa4f16c
Setting environment variables at the beginning of the job, and adding…
datapythonista Nov 19, 2018
c05f6b7
Fixing /home/mgarcia/miniconda3/envs/pandas-dev/bin:/home/mgarcia/min…
datapythonista Nov 19, 2018
9162aeb
Fixed typo in PATH
datapythonista Nov 19, 2018
5640907
Formatting flake8-rst output for azure
datapythonista Nov 19, 2018
0286099
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 20, 2018
629a209
Updating azure connection string
datapythonista Nov 20, 2018
588d153
Checking if environment.yml can be used for the docs and checks build…
datapythonista Nov 20, 2018
bc9c3b3
Remove unused stuff from azure config
datapythonista Nov 20, 2018
324b1e2
Doing checks as soon as possible (before conda, before build)
datapythonista Nov 20, 2018
5bf1709
Fixing build errors (typo in dependencies, missing cpplint, code chec…
datapythonista Nov 20, 2018
7cf2d68
Adapting generate_pip_deps_from_conda.py to support pip dependencies…
datapythonista Nov 20, 2018
2f8441e
Setting DISPLAY env to see if that fixes the to_clipboard problem
datapythonista Nov 20, 2018
30ba23e
Consistent way to source the conda environment
datapythonista Nov 20, 2018
85172b3
Fixing typo in environment name, and removing pyqt debug info
datapythonista Nov 20, 2018
450f84a
Adding checkpoints to see where the docstests change the exit status
datapythonista Nov 20, 2018
7b4b1ea
Restoring required html5lib dependency
datapythonista Nov 20, 2018
5d69e8b
Removing travis doc job
datapythonista Nov 20, 2018
3ce1aa0
Removing linting comments, and token for the docs from travis
datapythonista Nov 20, 2018
ee01df4
removing checkpoints and increasing verbosity of failing doctests
datapythonista Nov 20, 2018
e353e8a
Restoring bf4 in travis deps, as it's required, and fixing the to_cli…
datapythonista Nov 21, 2018
92a3921
Moving scripts unit tests to azure
datapythonista Nov 21, 2018
9e34037
Adding conda environment to run scripts tests, and changing verbosity…
datapythonista Nov 21, 2018
5a82f59
Using connection string secret variable instead of composing it in th…
datapythonista Nov 21, 2018
6cbb31b
Adding benchmark run to the CI
datapythonista Nov 22, 2018
2617696
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 22, 2018
75ad473
Adding git remote for the benchmarks, and removing connection string …
datapythonista Nov 22, 2018
7af04e1
Providing azure storage info as arguments instead of env, and fetchin…
datapythonista Nov 22, 2018
cc4331c
Trying to fix benchmarks and azure storage issues
datapythonista Nov 22, 2018
c5df401
Testing if azure storage key has a missing =, and setting up machine …
datapythonista Nov 22, 2018
4eac8bb
More changes to the doc upload and the benchmarks, to try to make the…
datapythonista Nov 22, 2018
00032d1
More fixes to doc upload and benchmarks
datapythonista Nov 22, 2018
e9ab754
Adding debug info for the docs upload and the benchmarks
datapythonista Nov 22, 2018
e47b4e1
Fixing bug in benchmarks set up, and fixing docstring errors, so vali…
datapythonista Nov 22, 2018
d0d4ae1
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 23, 2018
a225d44
Clean up of debug info
datapythonista Nov 23, 2018
d6d5c66
Uploading docs to GitHub pages in the same way as travis
datapythonista Nov 23, 2018
f7a6048
Adding variable group to the job config, to see if that makes them ac…
datapythonista Nov 23, 2018
9aa18d0
WIP: Restoring travis documentation build
datapythonista Nov 23, 2018
283233d
Restoring documentation build in travis
datapythonista Nov 23, 2018
991304d
Removing variable group, as variables are not in a group anymore, and…
datapythonista Nov 27, 2018
3a71185
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 27, 2018
19c396f
Adding missing conda environment activation
datapythonista Nov 27, 2018
a69e667
Fixing docstring errors
datapythonista Nov 27, 2018
59d55d8
Removing documentation build from azure
datapythonista Nov 28, 2018
df50c58
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 28, 2018
455e77c
Adding debug info to diagnose /home/mgarcia/miniconda3/bin:/home/mgar…
datapythonista Nov 28, 2018
7f48ac2
adding more debug information
datapythonista Nov 28, 2018
078a987
Revert "Removing documentation build from azure"
datapythonista Nov 28, 2018
d7883a1
Clean up of debug commands
datapythonista Nov 28, 2018
c2be491
Removing doc build in azure (restored because of the path problem)
datapythonista Nov 28, 2018
910825a
Building docs in azure, and uploading them to azure storage if it's a…
datapythonista Nov 29, 2018
35c7d99
Merge remote-tracking branch 'upstream/master' into azure_lint
datapythonista Nov 30, 2018
d321a42
Merging from master
datapythonista Nov 30, 2018
4328b04
Fixing pending docstrings
datapythonista Dec 1, 2018
4c57f48
Updating dependencies with the version that fixes the speed problems
datapythonista Dec 1, 2018
497f032
Updating pip requirements
datapythonista Dec 1, 2018
c88911a
Always uploading docs (removed if to test if uploading the docs work …
datapythonista Dec 1, 2018
75b89eb
Adding parallelization to build and docs
datapythonista Dec 1, 2018
011950e
Removing documentation build in azure, and reverting using more than …
datapythonista Dec 1, 2018
01942b9
Made an invgrep a newly added pattern validation
datapythonista Dec 1, 2018
0a14165
Merging from master
datapythonista Dec 1, 2018
705eb9d
Regenerating pip dependencies
datapythonista Dec 1, 2018
498cebb
Merging from master
datapythonista Dec 2, 2018
Merging from master
datapythonista committed Nov 19, 2018
commit 798698d1674f9076ddc00f8e3b2e1644769396e5
1 change: 0 additions & 1 deletion .gitignore
@@ -109,6 +109,5 @@ doc/build/html/index.html
# Windows specific leftover:
doc/tmp.sv
doc/source/styled.xlsx
doc/source/templates/
env/
doc/source/savefig/
1 change: 0 additions & 1 deletion .pep8speaks.yml
@@ -13,7 +13,6 @@ pycodestyle:
- W503, # line break before binary operator
- W504, # line break after binary operator
- E402, # module level import not at top of file
- E722, # do not use bare except
- E731, # do not assign a lambda expression, use a def
- C406, # Unnecessary list literal - rewrite as a dict literal.
- C408, # Unnecessary dict call - rewrite as a literal.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -23,7 +23,7 @@ env:

git:
# for cloning
depth: 1000
depth: 1500

matrix:
fast_finish: true
38 changes: 31 additions & 7 deletions asv_bench/benchmarks/timeseries.py
@@ -1,5 +1,6 @@
from datetime import timedelta

import dateutil
import numpy as np
from pandas import to_datetime, date_range, Series, DataFrame, period_range
from pandas.tseries.frequencies import infer_freq
@@ -57,7 +58,10 @@ def time_to_pydatetime(self, index_type):

class TzLocalize(object):

def setup(self):
params = [None, 'US/Eastern', 'UTC', dateutil.tz.tzutc()]
param_names = 'tz'

def setup(self, tz):
dst_rng = date_range(start='10/29/2000 1:00:00',
end='10/29/2000 1:59:59', freq='S')
self.index = date_range(start='10/29/2000',
@@ -68,8 +72,8 @@ def setup(self):
end='10/29/2000 3:00:00',
freq='S'))

def time_infer_dst(self):
self.index.tz_localize('US/Eastern', ambiguous='infer')
def time_infer_dst(self, tz):
self.index.tz_localize(tz, ambiguous='infer')


class ResetIndex(object):
@@ -377,15 +381,35 @@ def time_dup_string_tzoffset_dates(self, cache):

class DatetimeAccessor(object):

def setup(self):
params = [None, 'US/Eastern', 'UTC', dateutil.tz.tzutc()]
param_names = 'tz'

def setup(self, tz):
N = 100000
self.series = Series(date_range(start='1/1/2000', periods=N, freq='T'))
self.series = Series(
date_range(start='1/1/2000', periods=N, freq='T', tz=tz)
)

def time_dt_accessor(self):
def time_dt_accessor(self, tz):
self.series.dt

def time_dt_accessor_normalize(self):
def time_dt_accessor_normalize(self, tz):
self.series.dt.normalize()

def time_dt_accessor_month_name(self, tz):
self.series.dt.month_name()

def time_dt_accessor_day_name(self, tz):
self.series.dt.day_name()

def time_dt_accessor_time(self, tz):
self.series.dt.time

def time_dt_accessor_date(self, tz):
self.series.dt.date

def time_dt_accessor_year(self, tz):
self.series.dt.year


from .pandas_vb_common import setup # noqa: F401
18 changes: 16 additions & 2 deletions asv_bench/benchmarks/timestamp.py
@@ -2,6 +2,7 @@

from pandas import Timestamp
import pytz
import dateutil


class TimestampConstruction(object):
@@ -29,7 +30,8 @@ def time_fromtimestamp(self):


class TimestampProperties(object):
_tzs = [None, pytz.timezone('Europe/Amsterdam')]
_tzs = [None, pytz.timezone('Europe/Amsterdam'), pytz.UTC,
dateutil.tz.tzutc()]
_freqs = [None, 'B']
params = [_tzs, _freqs]
param_names = ['tz', 'freq']
@@ -87,7 +89,8 @@ def time_microsecond(self, tz, freq):


class TimestampOps(object):
params = [None, 'US/Eastern']
params = [None, 'US/Eastern', pytz.UTC,
dateutil.tz.tzutc()]
param_names = ['tz']

def setup(self, tz):
@@ -102,6 +105,17 @@ def time_replace_None(self, tz):
def time_to_pydatetime(self, tz):
self.ts.to_pydatetime()

def time_normalize(self, tz):
self.ts.normalize()

def time_tz_convert(self, tz):
if self.ts.tz is not None:
self.ts.tz_convert(tz)

def time_tz_localize(self, tz):
if self.ts.tz is None:
self.ts.tz_localize(tz)


class TimestampAcrossDst(object):
def setup(self):
3 changes: 2 additions & 1 deletion ci/deps/travis-36-doc.yaml
@@ -8,10 +8,11 @@ dependencies:
- bottleneck
- cython>=0.28.2
- fastparquet
- gitpython
- html5lib
- hypothesis>=3.58.0
- ipykernel
- ipython==6.5.0
- ipython
- ipywidgets
- lxml
- matplotlib
2 changes: 1 addition & 1 deletion ci/deps/travis-36.yaml
@@ -9,7 +9,7 @@ dependencies:
- fastparquet
- flake8>=3.5
- flake8-comprehensions
- flake8-rst
- flake8-rst=0.4.2
- gcsfs
- geopandas
- html5lib
7 changes: 6 additions & 1 deletion doc/make.py
@@ -126,7 +126,12 @@ def _process_single_doc(self, single_doc):
self.single_doc = 'api'
elif os.path.exists(os.path.join(SOURCE_PATH, single_doc)):
self.single_doc_type = 'rst'
self.single_doc = os.path.splitext(os.path.basename(single_doc))[0]

if 'whatsnew' in single_doc:
basename = single_doc
else:
basename = os.path.basename(single_doc)
self.single_doc = os.path.splitext(basename)[0]
elif os.path.exists(
os.path.join(SOURCE_PATH, '{}.rst'.format(single_doc))):
self.single_doc_type = 'rst'
1 change: 1 addition & 0 deletions doc/source/api.rst
@@ -1724,6 +1724,7 @@ MultiIndex Components
MultiIndex.set_levels
MultiIndex.set_labels
MultiIndex.to_hierarchical
MultiIndex.to_flat_index
MultiIndex.to_frame
MultiIndex.is_lexsorted
MultiIndex.sortlevel
49 changes: 43 additions & 6 deletions doc/source/conf.py
@@ -40,7 +40,6 @@
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.append(os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../sphinxext'))

sys.path.extend([

# numpy standard doc extensions
@@ -75,6 +74,7 @@
'sphinx.ext.ifconfig',
'sphinx.ext.linkcode',
'nbsphinx',
'contributors', # custom pandas extension
]

try:
@@ -120,7 +120,9 @@
templates_path = ['../_templates']

# The suffix of source filenames.
source_suffix = '.rst'
source_suffix = [
'.rst',
]

# The encoding of source files.
source_encoding = 'utf-8'
@@ -298,8 +300,26 @@
for page in moved_api_pages
}


common_imports = """\
.. currentmodule:: pandas

.. ipython:: python
:suppress:

import numpy as np
from pandas import *
import pandas as pd
randn = np.random.randn
np.set_printoptions(precision=4, suppress=True)
options.display.max_rows = 15
from pandas.compat import StringIO
"""


html_context = {
'redirects': {old: new for old, new in moved_api_pages}
'redirects': {old: new for old, new in moved_api_pages},
'common_imports': common_imports,
}

# If false, no module index is generated.
@@ -388,6 +408,7 @@
category=FutureWarning)


ipython_warning_is_error = False
ipython_exec_lines = [
'import numpy as np',
'import pandas as pd',
@@ -565,7 +586,7 @@ def linkcode_resolve(domain, info):
for part in fullname.split('.'):
try:
obj = getattr(obj, part)
except:
except AttributeError:
return None

try:
@@ -574,14 +595,14 @@
fn = inspect.getsourcefile(inspect.unwrap(obj))
else:
fn = inspect.getsourcefile(obj)
except:
except TypeError:
fn = None
if not fn:
return None

try:
source, lineno = inspect.getsourcelines(obj)
except:
except OSError:
lineno = None

if lineno:
@@ -653,7 +674,23 @@ def process_class_docstrings(app, what, name, obj, options, lines):
]


def rstjinja(app, docname, source):
"""
Render our pages as a jinja template for fancy templating goodness.
"""
# http://ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/
# Make sure we're outputting HTML
if app.builder.format != 'html':
return
src = source[0]
rendered = app.builder.templates.render_string(
src, app.config.html_context
)
source[0] = rendered


def setup(app):
app.connect("source-read", rstjinja)
app.connect("autodoc-process-docstring", remove_flags_docstring)
app.connect("autodoc-process-docstring", process_class_docstrings)
app.add_autodocumenter(AccessorDocumenter)
21 changes: 7 additions & 14 deletions doc/source/contributing.rst
@@ -591,21 +591,14 @@ run this slightly modified command::

git diff master --name-only -- "*.py" | grep "pandas/" | xargs flake8

Note that on Windows, these commands are unfortunately not possible because
commands like ``grep`` and ``xargs`` are not available natively. To imitate the
behavior with the commands above, you should run::
Windows does not support the ``grep`` and ``xargs`` commands (unless installed
for example via the `MinGW <http://www.mingw.org/>`__ toolchain), but one can
imitate the behaviour as follows::

git diff master --name-only -- "*.py"
for /f %i in ('git diff upstream/master --name-only ^| findstr pandas/') do flake8 %i

This will list all of the Python files that have been modified. The only ones
that matter during linting are any whose directory filepath begins with "pandas."
For each filepath, copy and paste it after the ``flake8`` command as shown below:

flake8 <python-filepath>

Alternatively, you can install the ``grep`` and ``xargs`` commands via the
`MinGW <http://www.mingw.org/>`__ toolchain, and it will allow you to run the
commands above.
This will also get all the files being changed by the PR (and within the
``pandas/`` folder), and run ``flake8`` on them one after the other.

.. _contributing.import-formatting:

@@ -1103,7 +1096,7 @@ Information on how to write a benchmark and how to use asv can be found in the
Documenting your code
---------------------

Changes should be reflected in the release notes located in ``doc/source/whatsnew/vx.y.z.txt``.
Changes should be reflected in the release notes located in ``doc/source/whatsnew/vx.y.z.rst``.
This file contains an ongoing change log for each release. Add an entry to this file to
document your fix, enhancement or (unavoidable) breaking change. Make sure to include the
GitHub issue number when adding your entry (using ``:issue:`1234``` where ``1234`` is the
4 changes: 2 additions & 2 deletions doc/source/index.rst.template
@@ -118,7 +118,7 @@ See the package overview for more detail about what's in the library.
{{ single_doc }}
{% endif -%}
{% if not single_doc -%}
whatsnew
What's New <whatsnew/v0.24.0>
install
contributing
overview
@@ -159,5 +159,5 @@ See the package overview for more detail about what's in the library.
developer
internals
extending
release
releases
{% endif -%}
4 changes: 3 additions & 1 deletion doc/source/install.rst
@@ -286,7 +286,9 @@ Optional Dependencies
`xsel <http://www.vergenet.net/~conrad/software/xsel/>`__, or
`xclip <https://github.com/astrand/xclip/>`__: necessary to use
:func:`~pandas.read_clipboard`. Most package managers on Linux distributions will have ``xclip`` and/or ``xsel`` immediately available for installation.
* `pandas-gbq <https://pandas-gbq.readthedocs.io/en/latest/install.html#dependencies>`__: for Google BigQuery I/O.
* `pandas-gbq
<https://pandas-gbq.readthedocs.io/en/latest/install.html#dependencies>`__:
for Google BigQuery I/O. (pandas-gbq >= 0.8.0)


* `Backports.lzma <https://pypi.org/project/backports.lzma/>`__: Only for Python 2, for writing to and/or reading from an xz compressed DataFrame in CSV; Python 3 support is built into the standard library.
9 changes: 8 additions & 1 deletion doc/source/io.rst
@@ -1580,12 +1580,19 @@ You can pass in a URL to a CSV file:
df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
sep='\t')

S3 URLs are handled as well:
S3 URLs are handled as well but require installing the `S3Fs
<https://pypi.org/project/s3fs/>`_ library:

.. code-block:: python

df = pd.read_csv('s3://pandas-test/tips.csv')

If your S3 bucket requires credentials you will need to set them as environment
variables or in the ``~/.aws/credentials`` config file; refer to the `S3Fs
documentation on credentials
<https://s3fs.readthedocs.io/en/latest/#credentials>`_.
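As a sketch of the environment-variable route mentioned above (the values are the standard AWS documentation placeholders, not real keys):

```shell
# Placeholder AWS credentials for a private S3 bucket; s3fs picks these
# up from the environment. The values are illustrative dummies only.
export AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
export AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
```

The ``~/.aws/credentials`` file route covers the same keys under a profile section; either is read automatically, so no extra arguments to ``read_csv`` are needed.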



Writing out Data
''''''''''''''''
You are viewing a condensed version of this merge commit.