chore(deps): update dependency langchain_core to v1.0.7 [security] #6599
Open: renovate wants to merge 1 commit into main from renovate/pypi-langchain_core-vulnerability
+1 −1

This PR contains the following updates:
| Package | Change |
| --- | --- |
| langchain_core | `==1.0.5` -> `==1.0.7` |

GitHub Vulnerability Alerts
CVE-2025-65106
Context
A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in `ChatPromptTemplate` and related prompt template classes.

Templates allow attribute access (`.`) and indexing (`[]`) but not method invocation (`()`). The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using `MessagesPlaceholder` with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., `__globals__`) to reach sensitive data such as environment variables.

The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates or use hardcoded templates, and are therefore not vulnerable.
Affected Components
- `langchain-core` package
- F-string templates (`template_format="f-string"`) - Vulnerability fixed
- Mustache templates (`template_format="mustache"`) - Defensive hardening
- Jinja2 templates (`template_format="jinja2"`) - Defensive hardening

Impact
Attackers who can control template strings (not just template variables) can:
- Access Python object internals via attribute traversal and indexing (e.g., `__class__`, `__globals__`)
- Potentially reach sensitive data such as environment variables

Attack Vectors
1. F-string Template Injection
Before Fix:
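The advisory's original "Before Fix" snippet is not reproduced in this excerpt. The following minimal sketch illustrates the mechanics with plain `str.format()`, which f-string formatting delegates to; the `Message` class and the exact traversal path are illustrative assumptions, and per the advisory an f-string template on versions before 1.0.7 would evaluate the same kind of field expression during formatting.

```python
# Sketch only: the stand-in Message class and traversal path are assumptions.
import os


class Message:
    """Stand-in for a chat message object exposed to the template."""

    def __init__(self, content: str) -> None:
        self.content = content


# Attacker-controlled *template string*: attribute access plus [] indexing
# walks from the message object to module globals and into os.environ.
untrusted_template = "{msg.__init__.__globals__[os].environ[PATH]}"

# str.format evaluates the full field expression, so this prints the PATH
# environment variable even though the template never calls a method.
print(untrusted_template.format(msg=Message("hello")))
```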
2. Mustache Template Injection
Before Fix:
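The original snippet for this vector is likewise missing here. As a hedged sketch, assuming langchain-core earlier than 1.0.7 where dotted mustache names fell back to `getattr()` on arbitrary objects, an attacker-controlled mustache template could look like the following; the traversal path shown is illustrative.

```python
# Hedged sketch; the exact pre-fix traversal path is an assumption.
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate

# Attacker-controlled template *string*; dotted names traverse attributes.
untrusted = "{{msg.__class__.__name__}}"

prompt = ChatPromptTemplate.from_messages(
    [("human", untrusted)],
    template_format="mustache",
)

# Before the fix, the getattr() fallback let the dotted path walk object
# attributes; on >= 1.0.7 traversal is restricted to dict/list/tuple types.
print(prompt.invoke({"msg": HumanMessage(content="hi")}))
```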
3. Jinja2 Template Injection
Before Fix:
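Again, the original snippet is not captured in this excerpt. Below is a minimal sketch of the jinja2 vector's shape (requires the `jinja2` package; the attribute/method chain is illustrative).

```python
# Hedged sketch; requires jinja2 to be installed.
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate

# Attacker-controlled template string using attribute and method access.
untrusted = "{{ msg.content.upper() }}"

prompt = ChatPromptTemplate.from_messages(
    [("human", untrusted)],
    template_format="jinja2",
)

# Pre-fix, the SandboxedEnvironment blocked dunders but still allowed this
# attribute/method access; on >= 1.0.7 the restricted environment raises
# SecurityError on any attribute access attempt.
print(prompt.invoke({"msg": HumanMessage(content="hi")}))
```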
Root Cause
- F-string templates used `string.Formatter().parse()` to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax: templates like `{obj.__class__.__name__}` or `{obj.method.__globals__[os]}` were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with `()`, they do support `[]` indexing, which could allow traversal through dictionaries like `__globals__` to reach sensitive objects (see the parsing snippet after this list).
- Mustache templates used `getattr()` as a fallback to support accessing attributes on objects (e.g., an attribute on a User object). However, we decided to restrict this to simpler primitives that subclass `dict`, `list`, and `tuple` types as defensive hardening, since untrusted templates could exploit attribute access to reach internal properties like `__class__` on arbitrary objects.
- Jinja2's `SandboxedEnvironment` blocks dunder attributes (e.g., `__class__`) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects passed to templates.
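The parsing behavior described in the first bullet can be seen directly with the standard library; this short snippet (not from the advisory) shows that `string.Formatter().parse()` returns the full field expression rather than a bare variable name.

```python
# Demonstration of the parsing behavior described above: parse() yields the
# complete field expression, including attribute access and indexing.
from string import Formatter

template = "{obj.__class__.__name__} and {obj.method.__globals__[os]}"
fields = [field for _, field, _, _ in Formatter().parse(template) if field]
print(fields)
# ['obj.__class__.__name__', 'obj.method.__globals__[os]']
```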
Who Is Affected?
High Risk Scenarios
You are affected if your application accepts template strings (the template structure, not just the variable values) from untrusted sources, for example user-supplied prompt templates that are rendered with `ChatPromptTemplate`.
Example vulnerable code:
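The advisory's example is not included in this excerpt; the sketch below (function and variable names are invented for illustration) shows the high-risk shape, where the template string itself comes from an untrusted user.

```python
# Hypothetical high-risk pattern: untrusted text becomes the template structure.
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder


def build_prompt(user_supplied_template: str) -> ChatPromptTemplate:
    # The attacker controls the template string, and MessagesPlaceholder
    # exposes chat message objects that the template can traverse.
    return ChatPromptTemplate.from_messages(
        [
            MessagesPlaceholder("history"),
            ("human", user_supplied_template),
        ]
    )
```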
Low/No Risk Scenarios
You are NOT affected if your templates are hardcoded in your application and only the template variable values come from users, or if you do not use prompt templates at all.
Example safe code:
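The original safe example is also missing here; below is a minimal sketch of the low-risk pattern (names are illustrative), where the template is hardcoded and only variable values come from users.

```python
# Safe shape: hardcoded template; untrusted data only fills variable values.
from langchain_core.prompts import ChatPromptTemplate

PROMPT = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{question}"),  # structure is trusted and fixed
    ]
)


def render(user_question: str):
    # User input is data, not template syntax, so nothing is evaluated.
    return PROMPT.invoke({"question": user_question})
```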
The Fix
F-string Templates
F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:
- Rejected: templates containing attribute access or indexing, such as `{obj.attr}`, `{obj[0]}`, or `{obj.__class__}` (illustrated below)
- Allowed: simple variable names such as `{variable_name}`
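As a rough illustration of that validation (assuming langchain-core >= 1.0.7; the exact exception type and message are not stated in this excerpt):

```python
# Sketch of the new f-string validation; exception details are assumptions.
from langchain_core.prompts import PromptTemplate

PromptTemplate.from_template("{variable_name}")  # simple names are accepted

try:
    PromptTemplate.from_template("{obj.__class__}")  # attribute access
except Exception as exc:
    # Per the advisory, templates with attribute access or indexing are
    # now rejected during validation.
    print(f"rejected: {exc!r}")
```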
Mustache Templates (Defensive Hardening)
As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:
- Replaced the `getattr()` fallback with strict type checking
- Attribute-style lookups are limited to objects that subclass `dict`, `list`, and `tuple` types

Jinja2 Templates (Defensive Hardening)
As defensive hardening, we've significantly restricted Jinja2 template capabilities:
- Templates are rendered in a `_RestrictedSandboxedEnvironment` that blocks ALL attribute/method access
- Raises `SecurityError` on any attribute access attempt

Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.
While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.
Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., `HumanMessage`, `AIMessage`, `ToolMessage`) without templates. Direct message construction avoids template-related security concerns entirely; a minimal sketch of this pattern follows.
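```python
# Template-free construction: user input is content, never template syntax.
# Function and variable names here are illustrative.
from typing import List, Optional

from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage


def build_messages(user_input: str, prior_answer: Optional[str] = None) -> List[BaseMessage]:
    messages: List[BaseMessage] = [
        SystemMessage(content="You are a helpful assistant.")
    ]
    if prior_answer is not None:
        messages.append(AIMessage(content=prior_answer))
    messages.append(HumanMessage(content=user_input))
    return messages
```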
Remediation
Immediate Actions
- Upgrade `langchain-core` to a patched version (1.0.7 or later)

Best Practices
- Do not accept template strings from untrusted sources; keep templates hardcoded and pass only variable values from users
- Where possible, construct messages directly (`HumanMessage`, `AIMessage`, etc.) without templates
- Reserve Jinja2 templates for trusted template sources only

LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates
CVE-2025-65106 / GHSA-6qv9-48xg-fc7f
More information
Details
Severity
CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:H/VI:L/VA:N/SC:N/SI:N/SA:N

References
This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).
Configuration
📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.