reverseproxy: Add duration/latency placeholders (close #4012) #4013
Conversation
Adds 4 placeholders, one is actually outside reverse proxy though:
{http.request.duration} is how long since the server decoded the HTTP request (headers).
{http.reverse_proxy.upstream.latency} is how long it took a proxy upstream to write the response header.
{http.reverse_proxy.upstream.duration} is total time proxying to the upstream, including writing response body to client.
{http.reverse_proxy.duration} is total time spent proxying, including selecting an upstream and retries.
Obviously, most of these are only useful at the end of a request, like when writing response headers or logs.
See also: https://caddy.community/t/any-equivalent-of-request-time-and-upstream-header-time-from-nginx/11418
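To make the timing points concrete, here is a minimal standalone sketch -- not Caddy's actual implementation, just the Go standard library with made-up addresses -- of roughly what each placeholder measures:

```go
// Standalone sketch (NOT Caddy's code) approximating what the four
// placeholders measure, using net/http/httputil. Addresses are made up.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	backend, err := url.Parse("http://localhost:9000") // hypothetical upstream
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// {http.request.duration}: time since the request headers were
		// decoded; handler entry is a close-enough approximation here.
		reqStart := time.Now()

		// {http.reverse_proxy.duration}: total time spent proxying. With a
		// single upstream and no retries it coincides with upstream.duration.
		proxyStart := time.Now()

		// Build the proxy per request so ModifyResponse can capture
		// {http.reverse_proxy.upstream.latency}: time until the upstream
		// wrote its response header.
		var upstreamLatency time.Duration
		proxy := httputil.NewSingleHostReverseProxy(backend)
		proxy.ModifyResponse = func(*http.Response) error {
			upstreamLatency = time.Since(proxyStart)
			return nil
		}

		proxy.ServeHTTP(w, r) // also copies the response body to the client

		// {http.reverse_proxy.upstream.duration}: time proxying to this
		// upstream, including writing the response body to the client.
		upstreamDuration := time.Since(proxyStart)

		log.Printf("request.duration=%s upstream.latency=%s upstream.duration=%s",
			time.Since(reqStart), upstreamLatency, upstreamDuration)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```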
Should add these to the godocs, right?
Yeah, good point. Doing that now.
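Something like the following doc-comment sketch, for instance (hypothetical wording; the real godoc text is whatever landed in the module):

```go
// Hypothetical doc-comment sketch only; see the actual reverseproxy godoc
// for the wording that was committed.
//
// Handler implements a highly configurable reverse proxy. In addition to the
// placeholders set by the HTTP server (such as {http.request.duration}, the
// time since the request headers were decoded), this handler sets:
//
//	{http.reverse_proxy.upstream.latency}   how long the upstream took to
//	                                        write the response header
//	{http.reverse_proxy.upstream.duration}  total time proxying to the upstream,
//	                                        including writing the response body
//	                                        to the client
//	{http.reverse_proxy.duration}           total time spent proxying, including
//	                                        selecting an upstream and retries
type Handler struct {
	// ...
}
```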
Alright, this is looking great. Given this caddy.json, I've successfully received the new placeholder values in my response headers. Note that {http.reverse_proxy.upstream.latency} ends up including the time my client spends uploading the request body, since my backend reads the whole body before responding. What if the latency were instead measured from the moment the request body has been fully read?
@MarioIshac Great, thanks for trying it out.
I'm not sure that's a reasonable assumption we can make: AFAIK there's no reason a backend can't start sending the response before it has finished reading the request, and in fact I'm pretty sure quite a few servers work this way (and this has been the cause of some bugs in the Go standard library in the past).
I think that's actually good information to get from these placeholders -- the small difference makes it easy to see that the client's upload is the bulk of the request time, but that deduction requires knowing how your backend works: a backend will either wait to read the entire body before writing a response, or it will start writing a response before the body has been fully read.
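For illustration (not code from this PR), here's a toy backend that writes its response before it has finished reading the request body, which is exactly why that assumption doesn't hold in general:

```go
// Toy upstream that responds before consuming the request body, e.g. an
// early "unauthorized" rejection of a large upload. Port is arbitrary.
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") == "" {
			// Respond immediately; the (possibly huge) body is never read,
			// so the upstream's response header arrives long before the
			// client has finished uploading.
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// Authorized requests do get their body consumed in full.
		io.Copy(io.Discard, r.Body)
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```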
I figure we can merge this?
Is the implementation of these placeholders enough to cover the use case in the linked forum topic?
Currently using the build artifact from this PR in prod, and it's been working well. Let me know if there's anything more I can do to help get this merged; I don't have the expertise to comment on the implementation itself, though.
I think the only open question was whether we agree on how these durations should be measured, per the discussion above. Thanks!
Matt's answer makes sense to me, especially after I looked into what types of servers would send a response back before the request is complete (sending "unauthorized" instead of waiting for a file upload to complete, for example). Because this is an implementation detail of the backend, the backend itself can report the time it received the last byte of the request, outside of Caddy, if desired.
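A sketch of that last idea, assuming a hypothetical header name of the backend's choosing:

```go
// Hypothetical backend that reports, in a header it chooses itself, when it
// finished reading the last byte of the request body -- no Caddy support needed.
package main

import (
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
		io.Copy(io.Discard, r.Body) // read the request body to the end

		// "X-Request-Body-Read-At" is a made-up header name for illustration.
		w.Header().Set("X-Request-Body-Read-At",
			time.Now().UTC().Format(time.RFC3339Nano))
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```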
@francislavoie Hmm, yes I believe it does. Between these 4 placeholders, they should be able to get the info they need. Thanks for linking that up. I'll go ahead and merge this then. Thank you both! |
Closes #4012