bpo-46337: Urllib.parse scheme-specific behavior without reliance on URL scheme #30520
oldaccountdeadname wants to merge 23 commits into python:main
Conversation
Some features in urllib.parse depend on the URL scheme (e.g., preserving the netloc when joining URLs). Prior to this patch, this was governed by the uses_* lists (uses_relative, uses_netloc, uses_params), which hard-code these attributes for certain schemes. Providing an enum interface and a 'constructor' that allows overrides makes this mechanism more flexible for future modifications.

This allows callers of urljoin and urlparse to apply guaranteed scheme classes to a URL regardless of its actual scheme, which may not be in the default uses_* lists of schemes. This call-time behavior is controlled through an optional parameter that preserves backwards compatibility. A test case is added for this, which requires the change to test_urlparse.checkJoin.
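To make the status quo concrete: with only the current stdlib, urljoin refuses relative resolution for any scheme missing from uses_relative (myproto below is a made-up scheme used for illustration):

```python
from urllib.parse import urljoin

# A scheme listed in uses_relative gets normal relative resolution:
print(urljoin('http://host/a/b', 'c'))     # http://host/a/c

# An unregistered scheme does not: the base is ignored entirely and
# the relative reference comes back unchanged.
print(urljoin('myproto://host/a/b', 'c'))  # c
```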
(force-pushed 627b732 to 53c6ccc)

This PR is stale because it has been open for 30 days with no activity.
MaxwellDupre left a comment:
Could you add info to docs Please?
Otherwise how will users know how to use or that it exists?
Could you add info to the docs, please?
Ah, definitely, I totally forgot to do that - I just pushed a draft of
some docs (e1082f8). Thanks for pointing that out!
This functionality was exposed in 53c6ccc.
(force-pushed e1082f8 to f9b59dd)
Ah, CI was failing due to my editor inserting tabs - should be fixed. The docs commit is now f9b59dd.
It looks like CI expects this when building documentation.

@orsenthil - sorry for the extra email/ping, but would you be able to give this a review when you've got some spare time? It's been sitting stale for roughly a month now. Thanks!
@lincolnauster - sure, I will review.
Without a trailing slash, urljoin treats `..` as moving up one file rather than one directory, causing the doctests to fail. Both changes are of the form: http://example.org/post/x -> http://example.org/post/x/. Additionally, the my-protocol example's expected output had the wrong scheme.
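The behavior described here is easy to check against the stdlib: `..` resolves against the directory of the base path, and without a trailing slash the final segment counts as a file:

```python
from urllib.parse import urljoin

# '/post/x' is treated as a file, so '..' climbs out of '/post/':
print(urljoin('http://example.org/post/x', '..'))   # http://example.org/

# With a trailing slash, '/post/x/' is a directory, so '..' lands in '/post/':
print(urljoin('http://example.org/post/x/', '..'))  # http://example.org/post/
```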
sigh - code was right, docs were wrong.

Stupid question, but I couldn't find this in the devguide: how do I actually run the doctests on my local checkout? I was performing them manually in a shell, but evidently I accidentally corrected them while typing them in. I'm running `gmake doctest PYTHON=../python SPHINXOPTS="-j24 --keep-going"` ...and I'm getting a ton of errors from code I haven't touched (a bunch of stuff in the ast module, _tkinter wasn't found, etc.). Is this a misconfiguration on my part? How can I actually run the doctests?

Also, I pushed a fix for the current cause of failure. Would it be helpful to squash/rebase it into the original doc commit (f9b59dd) and force-push? Thanks so much!
Minor style things:
- _scheme_classes' docstring summary was made explicit.
- _scheme_classes is now preceded and followed by two newlines.
    .. versionadded:: 3.2

    Special URL Behaviors and Scheme Classes
The name "class" is confusing to me, because it makes me think of a class statement.
Also, why not just add three boolean parameters to urlparse and urljoin? That should make the behavior simpler to use for users.
See also my alternate proposal for a dedicated function on the ticket.
(Fits with the API design guideline of avoiding a boolean param that changes behaviour — I forget the exact phrasing and origin of that)
@JelleZijlstra Three boolean parameters would be a pain. re.match, re.search, etc., use an options parameter to collect all the flags in one.
Why would they be a pain? As a user I'd much rather write use_relative=True than look up some separate flags enum.
Likewise, I am not sure that a bitfield integer flag is the best thing for a regular Python API. The re module had a backward compat concern that doesn’t apply here. But (sorry to repeat my opinion) I am not in favour of piecemeal control, I would rather have a universal parsing function that just implements the universal resource identifier spec :)
@merwok (It's not an integer, that would be IntFlag)
re the universal parsing function -- how would that differ from the proposed additions to urlparse.urlparse?
See ticket:
urllib is a module that pre-dates the idea of universal parsing for URIs, where the delimiters (like ://) are enough to determine the parts of a URI and give them meaning (host, port, user, path, etc).
Backward compat for urllib is always a concern; someone said at the time that it could be good to have a new module for modern, generic parsing, but that hasn’t happened. Maybe a new parse function, or new parameter to the existing one, could be easier to add.
So I don’t want to have to pass parameters to request standard parsing for this or that component, I only want one function that forgets the special registries that urllib.parse relies on and just parses a well-formed URI into the universal components (scheme host path etc).
So I don’t want to have to pass parameters to request standard parsing for this or that component, I only want one function that forgets the special registries that urllib.parse relies on and just parses a well-formed URI into the universal components (scheme host path etc).
With the currently pushed code, this would be a one-line lambda:

    from urllib.parse import SchemeFlag, urlparse
    parse_standardized = lambda url: urlparse(url, flags=SchemeFlag.NETLOC | SchemeFlag.RELATIVE | SchemeFlag.PARAMS)

Is this worth adding to the PR? I imagine it could be done more extremely (i.e., completely redefining the behavior of urlparse), but would backwards compat not be a concern?
No, we should not redefine the behaviour of urlparse.
I was always talking about adding another function. Yes it can be a one-liner, but my point is that I don’t see the usefulness of having the separate flags to pick and choose parts of standard parsing.
@merwok - added comments to issue; discussion here should be about this PR, discussion about competing PRs (even if the other PRs haven't been written yet ;) should be on the issue tracker.
    describe methods for URL resolution, usually by scheme. These resolution classes
    determine, namely, whether a scheme supports, respectively, relative addressing,
    preserving the netloc (domain name), and preserving the parameters."""
    SchemeClass = Enum('SchemeClass', 'RELATIVE NETLOC PARAMS')
Better to use a class statement so the docstring can actually be a docstring.
    del _fix_result_transcoding

    -def urlparse(url, scheme='', allow_fragments=True):
    +def urlparse(url, scheme='', allow_fragments=True, classes=set()):
Avoid mutable defaults; an empty iterable works well. (There are other cases of this throughout the diff.)
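For readers who haven't hit the pitfall behind this advice: a mutable default is created once, at function definition time, and shared across every call:

```python
def append_to(item, seen=[]):          # bug: one list shared by all calls
    seen.append(item)
    return seen

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2]  - state leaked from the previous call

def append_to_fixed(item, seen=None):  # idiom: immutable sentinel default
    if seen is None:
        seen = []
    seen.append(item)
    return seen

print(append_to_fixed(1))  # [1]
print(append_to_fixed(2))  # [1]
```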
    -def urlparse(url, scheme='', allow_fragments=True, classes=set()):
    +def urlparse(url, scheme='', allow_fragments=True, classes=()):
@JelleZijlstra using SchemeFlag() creates an immutable default.
I was referring to the original code that used sets, not to your flags suggestion.
ethanfurman left a comment:
I think SchemeFlag works better than SchemeClass. Either way, use an enum.Flag for it, and consider using __repr__ similar to the one in re.RegexFlag.
    "parse_qsl", "quote", "quote_plus", "quote_from_bytes",
    "unquote", "unquote_plus", "unquote_to_bytes",
    -"DefragResult", "ParseResult", "SplitResult",
    +"DefragResult", "ParseResult", "SplitResult", "SchemeClass",
Following the example of re.RegexFlag, name this SchemeFlag. Also, use enum.Flag instead of enum.Enum.
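A sketch of that suggestion, assuming the member names proposed in this PR (SchemeFlag is not part of the stdlib):

```python
from enum import Flag, auto

class SchemeFlag(Flag):
    """Flags selecting scheme-specific parsing behaviors (proposed)."""
    RELATIVE = auto()
    NETLOC = auto()
    PARAMS = auto()

# Flag members compose with |, and membership tests read naturally:
combined = SchemeFlag.RELATIVE | SchemeFlag.NETLOC
print(SchemeFlag.RELATIVE in combined)  # True
print(SchemeFlag.PARAMS in combined)    # False
```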
    'mms', 'sftp', 'tel']


    def _scheme_classes(scheme, overrides=set()):
overrides should be options or flags. (re.search uses flags)
    splitresult = urlsplit(url, scheme, allow_fragments)
    scheme, netloc, url, query, fragment = splitresult
    if scheme in uses_params and ';' in url:
    scheme_classes = _scheme_classes(scheme, overrides=classes)
_scheme_classes(scheme, overrides=classes) --> _scheme_classes(scheme, flags)
    parameter.

    """
    scheme_classes = set(overrides)
    scheme_classes = set(overrides)

    if scheme in uses_relative:
        scheme_classes.add(SchemeClass.RELATIVE)
options |= SchemeFlag.RELATIVE
same transformation for the next two branches
    if scheme in uses_params:
        scheme_classes.add(SchemeClass.PARAMS)

    return scheme_classes
A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers, that would be appreciated. Once you have made the requested changes, please leave a comment on this pull request containing the phrase "I have made the requested changes; please review again".

I have made the requested changes; please review again. Again, thanks so much for the thorough review, and please tell me if there's anything missing!
Thanks for making the requested changes! @ethanfurman: please review the changes made to this pull request.
(copying from #90495 - we can continue discussion here, as a lot of context is preserved here)

Hi @lincolnauster, I was -1 and have been thinking a lot about introducing a flag with the enum in the parse module. This API signature is going to confuse people and will be a huge blocker for further adoption and change (even if the default arguments are specified). I was thinking about how best to mitigate that.

If a design discussion is required, we can bring it to a wider forum too.
Hi @lincolnauster , I was -1 and was thinking much on introducing a
flag with the enum in the parse module.
urlparse(url, scheme='', allow_fragments=True, flags=SchemeFlag(0)):
This API signature is going to confuse people and will be huge blocker
for further adoption and change (even if the default arguments are
specified). I was thinking how best to mitigate that.
Tbh, I'm not too bothered by the additional complexity in the function
signature. As I see it, the complexity is already present in the
module's uses_* lists. If those lists align with your goals, you don't
need to care about them, but if they don't, you need to deal with that
implicit complexity. Documenting it and making it explicit feels like
the best path to take, imo.
1. Did we ever consider not going with a flag as a parameter?
There were a few alternatives considered in this PR/issue and a few more
a few years ago. Correct me if I'm wrong, but I'm not sure anyone is
really in favor of the current scheme-based parsing. That said,
backwards compat is valuable, so it looks like we're trying to find some
way to augment the current parse-behavior selection with a solution that
doesn't involve magic strings.
Currently, client code can modify the lists that determine the behavior
for each scheme, but that's got two problems, as I understand:
1. It's a bag of global state. Who owns what data? What if there are
conflicting modifications? This is a pretty good way to cause messy
problems.
2. What if we don't know the schemes we're parsing in advance? We'd
need a *lot* of interactions with the messy global state, and thus
potentially cause a lot of problems.
One proposed solution was formalizing that global state into a global
registry, which would reduce the impact of the above problems, but not
fully solve them. It's certainly a simpler solution, but it also feels
harder to use in non-trivial cases. Some form of per-parse/join context
seems to be required to address the above issues and keep compat.
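For concreteness, the workaround available today is to mutate the module-level lists (myproto is a made-up scheme; note that the mutation is visible to every other user of urllib.parse in the process):

```python
import urllib.parse

# Before registration, relative resolution is refused for the scheme:
print(urllib.parse.urljoin('myproto://host/a/b', 'c'))  # c

# Mutating the module-level lists changes behavior process-wide:
urllib.parse.uses_relative.append('myproto')
urllib.parse.uses_netloc.append('myproto')
print(urllib.parse.urljoin('myproto://host/a/b', 'c'))  # myproto://host/a/c
```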
2. Any other default for the flag rather than an enum?
I'm not sure what you mean by that. Could you explain further?
If design design required, we can bring it to a wider forum too.
This seems like a good idea if necessary. Thanks for all the thoughts!
gpshead left a comment:
I agree with Senthil about the API growing something unobvious. If we add a parameter, it should be keyword only and well named to indicate that it is unusual to provide it. I'd also set its default value to None rather than introducing the casual user to a new concept (SchemeFlag).
The name flags= is probably not sufficient. Something more wordy like behavior_overrides= would communicate better that these are not necessary in most cases as reasonable default behaviors exist. Similarly the name SchemeFlags doesn't necessarily communicate the right thing when seen in code as it technically has nothing to do with a scheme itself. Perhaps something like ParseBehaviors?
    ParseResult(scheme='http', netloc='docs.python.org:80',
                path='/3/library/urllib.parse.html', params='',
                query='highlight=params', fragment='url-parsing')
    ParseResult(scheme='http', netloc='docs.python.org:80', path='/3/library/urllib.parse.html', params='', query='highlight=params', fragment='url-parsing')
do not reformat doctest examples. these were formatted to be narrow to avoid horizontal scrollbars in documentation on most common displays and to keep the .rst itself <80 columns when possible.
reformatting is unrelated to the change at hand and distracts from the actual change.
Restoring the original formatting causes the doctest to fail; I should've broken that out into a separate and clear commit... I'm doctesting with ./python -m doctest. Is that wrong, or is there some other way I can keep the old linebreaks?
@gpshead re behavior_overrides vs flags: aren't flags usually behavior overrides? ssl, socket, _pydecimal, _osx_support, and re all use flags, while doctest uses compileflags, _pyio use dec_flags, and subprocess uses creationflags.
My first choice here would be a simple flags, and it should be easily understood that the flags given will modify the parsing behavior of urlparse. Would it be more precise to call it uri_flags? At any rate, behavior_overrides is no less generic and much more verbose than flags.
    .. function:: urljoin(base, url, allow_fragments=True)
    .. function:: urljoin(base, url, allow_fragments=True, classes=SchemeFlag(0))
Why is this called classes here and flags above? Consistency is important. These should also be made keyword only arguments.
I forgot to update after renaming from classes to flags. I'll change it over to behavior_overrides.
    "parse_qsl", "quote", "quote_plus", "quote_from_bytes",
    "unquote", "unquote_plus", "unquote_to_bytes",
    "DefragResult", "ParseResult", "SplitResult",
    "SchemeFlag", "RELATIVE", "NETLOC", "PARAMS", "UNIVERSAL",
Let's not pollute __all__ with the CONSTANT_NAMES. People shouldn't really use from urllib.parse import *, but if they do, they shouldn't get these - just SchemeFlag.
@ethanfurman thoughts? I remember you suggested that these should be exported as such for code like urlparse(uri_string, flags=UNIVERSAL) or similar. I'm fine either way, but do agree that the namespace would be cleaner were the flags not exported individually.
Putting them in globals() is not for the from ... import * case, since, as @gpshead said, folks should not be doing that; putting them in globals() is to enable urlparse.RELATIVE usage, much like we have re.IGNORECASE and not re.RegexFlag.IGNORECASE.
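A sketch of the pattern being described (SchemeFlag is the proposed class, not stdlib; CPython's re module has exposed its flag members this way, via globals().update(RegexFlag.__members__) and, in newer versions, the enum.global_enum decorator):

```python
from enum import Flag, auto

class SchemeFlag(Flag):
    RELATIVE = auto()
    NETLOC = auto()
    PARAMS = auto()

# Copy the flag members into the module namespace, so callers can
# write module.RELATIVE instead of module.SchemeFlag.RELATIVE,
# mirroring how re.IGNORECASE is re.RegexFlag.IGNORECASE.
globals().update(SchemeFlag.__members__)

print(RELATIVE is SchemeFlag.RELATIVE)  # True
```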
    self.checkJoin('svn+ssh://pathtorepo/dir1', 'dir2', 'svn+ssh://pathtorepo/dir2')
    self.checkJoin('ws://a/b','g','ws://a/g')
    self.checkJoin('wss://a/b','g','wss://a/g')
    self.checkJoin(
make these new SchemeFlag specific test methods instead of appending to an existing long one.
    self.checkJoin(
        'nonsensebase://net.loc/url/', '..',
        'nonsensebase://net.loc/',
        flags=(urllib.parse.SchemeFlag.RELATIVE | urllib.parse.SchemeFlag.NETLOC),
add more test cases that explicitly cover PARAMS and UNIVERSAL behaviors.
    del _fix_result_transcoding

    -def urlparse(url, scheme='', allow_fragments=True):
    +def urlparse(url, scheme='', allow_fragments=True, flags=SchemeFlag(0)):
@gpshead Somewhere I thought you said it should be flags=None instead -- I agree.
None is used - 677ed1aac3.
    """Join a base URL and a possibly relative URL to form an absolute
    -interpretation of the latter."""
    +interpretation of the latter. Some logic may be enabled by setting
    +the classes variable."""
I also suggest making the new parameter be keyword only.
A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers, that would be appreciated. Once you have made the requested changes, please leave a comment on this pull request containing the phrase "I have made the requested changes; please review again".
Hi, sorry for getting back to this late, and thanks for all the feedback.
Agreed, I'll push these API clarifications shortly!
Okay, I finally had some time to do a thorough review of the module and, as much as I love enums, I don't think this is the right solution to this problem:

- allow_fragments should be part of the enum, but that would just be confusing at this point
- specifying NETLOC or RELATIVE to urlparse() does nothing

I like the attempt to unify the three class lists, but I don't think this approach is the best choice for the urlparse module as it exists. Given that we already have an allow_fragments flag, I think the best path forward is to add an allow_params flag, with a default of False -- or add a separate universal parsing function, as has been suggested earlier.

@lincolnauster Are you interested in converting this PR to one of the two above choices?
Hi, very sorry for the very late reply - a number of things just stacked
up for the past few months.
Okay, I finally had some time to do a thorough review of the module
and, as much as I love enums, I don't think this is the right solution
to this problem:
* allow_fragments should be part of the enum, but that would just be
confusing at this point
Agreed in hindsight. There are too many flags for a flag to be
comprehensible.
* specifying NETLOC or RELATIVE to urlparse() does nothing
Yeah, the parse/join interface is heterogeneous. Definitely an oversight
in the proposed patches as-is (and an oversight in the scheme lists
being part of the public API).
I like the attempt to unify the three class lists, but I don't think
this approach is the best choice for the urlparse module as it exists.
Given that we already have an allow_fragments flag, I think the best
path forward is to add an allow_params flag, with a default of
False...
What about joining? I believe that when given a string, urljoin will
urlparse it. How do we determine the keyword arguments for urlparse in
that case? I assume we use additional parameters. What about netloc and
relative joining? I believe that gives us this rather unwieldy
signature:
```python
def urljoin(base, url, allow_fragments=True, allow_params=False, allow_netloc=False, allow_relative=False)
```
Maybe it could be written as the following:
```python
def urljoin(base, url, **kwargs)
```
...where kwargs are passed to urljoin, but I'm still smelling something
off here. I'd expect that stacking overrides on overrides on global
state is going to get us unpredictable and untestable behavior, no
matter how we write out the signature.
-- or add a separate universal parsing function, as has been suggested
earlier.
I think I agree on this - universal parsing and joining is probably a
good thing.
Would it be acceptable to build off the code currently in this PR? That
is, simply define some new universal parse and universal join lambdas as
proposed above[0], but without exposing the parse flags to the public
API? It's kind of messy, but at least it partially moves the mess from
the public API to the private implementation.
[0]: #30520 (comment)
Thanks so much for the detailed discussion, and, again, sorry for such a
late response :)
CI looks like it's running, it's red only because there's a requested
change.
I believe there was an issue with the previous push during the check of
generated files:
https://github.com/python/cpython/runs/5747980058
I think this change is stale now. We really do not want to introduce additional complexity to the parsing here, at least not many changes in one go. I am closing this request, and we can reopen it to add individual changes (minor refactors in a planned manner) if we want a generic solution. The current diff can introduce complexity, as many reviewers have observed above. Thank you for your contribution.
See bpo-46337: https://bugs.python.org/issue46337