DOC-522: Fix non-terminating traffic policy in robots.txt example #2055
Conversation
Co-authored-by: sg-writer <212444234+sg-writer@users.noreply.github.com>
Style linting summary: Vale found no style issues in this PR. Great job!
@copilot update your pr description to reflect the changes I just pushed
@sg-writer The model request failed. You can try again without specifying a model. If you want to contact GitHub about this error, please mention the identifier from the error response so they can better serve you.

@copilot see above
Updated PR description:

The robots.txt traffic policy example included a `forward-internal` rule, which is non-terminating, causing policy evaluation to continue after forwarding non-robots.txt requests.

Changes: removed the non-terminating `forward-internal` rule and standardized the matcher to `req.url.path.contains('/robots.txt')` across the robots.txt examples.

Result: All robots.txt examples now use terminating `custom-response` actions.

Files Modified: `traffic-policy/examples/block-unwanted-requests.mdx`, `snippets/traffic-policy/gallery/AddRobotsTxt.mdx`, `snippets/traffic-policy/gallery/AddRobotsTxtSpecific.mdx`
Before:

```yaml
on_http_request:
  - expressions:
      - "!req.url.path.contains('/robots.txt')"
    actions:
      - type: forward-internal # non-terminating
        config:
          url: <Internal endpoint URL Here>
  - expressions:
      - "req.url.path.contains('/robots.txt')"
    actions:
      - type: custom-response
        config:
          body: "User-agent: *\r\nDisallow: /"
          headers:
            content-type: text/plain
          status_code: 200
```

After:

```yaml
on_http_request:
  - expressions:
      - "req.url.path.contains('/robots.txt')"
    actions:
      - type: custom-response # terminating
        config:
          body: "User-agent: *\r\nDisallow: /"
          headers:
            content-type: text/plain
          status_code: 200
```
|
Pull request overview
Updates Traffic Policy documentation examples to avoid a non-terminating `forward-internal` action in the robots.txt pattern, ensuring only robots.txt requests receive the custom response while all other traffic proceeds normally.

Changes:
- Removed the `forward-internal` rule from the “deny bots/crawlers via robots.txt” example (YAML + JSON).
- Standardized the robots matcher to `req.url.path.contains('/robots.txt')` in the “specific bots/crawlers” examples.
- Kept the examples focused on terminating `custom-response` behavior for `robots.txt`.
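Since the changes cover both YAML and JSON variants of the snippets, here is a sketch of what the corrected JSON form presumably looks like, assuming the field names mirror the YAML example posted earlier in the thread:

```json
{
  "on_http_request": [
    {
      "expressions": ["req.url.path.contains('/robots.txt')"],
      "actions": [
        {
          "type": "custom-response",
          "config": {
            "body": "User-agent: *\r\nDisallow: /",
            "headers": { "content-type": "text/plain" },
            "status_code": 200
          }
        }
      ]
    }
  ]
}
```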
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| traffic-policy/examples/block-unwanted-requests.mdx | Simplifies robots.txt example to only intercept /robots.txt; updates matcher in the “specific bots” variant. |
| snippets/traffic-policy/gallery/AddRobotsTxt.mdx | Removes non-terminating forward rule from the standalone robots.txt snippet (YAML + JSON). |
| snippets/traffic-policy/gallery/AddRobotsTxtSpecific.mdx | Updates matcher to use req.url.path.contains('/robots.txt') in YAML + JSON snippet. |
S3Prototype
left a comment
Need to double-check with the team here; maybe ask Jen or Michael latouf. According to the docs, `forward-internal` is a terminating action.
Pretty sure this was actually already resolved, whoops! 😬
The robots.txt traffic policy example included a `forward-internal` rule, which is non-terminating, causing policy evaluation to continue after forwarding non-robots.txt requests. This was incorrect behavior.

Changes:
- Removed the `forward-internal` rule from the "deny bots/crawlers via robots.txt" example. Non-robots.txt requests naturally pass through to the upstream; no explicit forwarding rule is needed.
- Use `req.url.path.contains('/robots.txt')` consistently across all robots.txt examples.
- All examples now use terminating `custom-response` actions with consistent expression syntax.

Before:
After:
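For reference, the corrected ("After") policy, as shown in the diff posted earlier in this thread:

```yaml
on_http_request:
  - expressions:
      - "req.url.path.contains('/robots.txt')"
    actions:
      - type: custom-response # terminating: evaluation stops here for robots.txt requests
        config:
          body: "User-agent: *\r\nDisallow: /"
          headers:
            content-type: text/plain
          status_code: 200
```

All other requests fall through to the upstream without any explicit forwarding rule.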
Updated in `traffic-policy/examples/block-unwanted-requests.mdx`, `snippets/traffic-policy/gallery/AddRobotsTxt.mdx`, and `snippets/traffic-policy/gallery/AddRobotsTxtSpecific.mdx`.