refactor(mcp): describegpt llm passthru tweaks#3585

Merged
jqnatividad merged 14 commits into master from
mcp-describegpt-llm-passthru-tweaks on Mar 7, 2026

Conversation

@jqnatividad
Collaborator

No description provided.

jqnatividad and others added 10 commits March 6, 2026 09:53
… MCP mode

- Pass `server` through `handleGenericCommand` so describegpt interception works
- Block `--prompt` (SQL RAG mode) in MCP server mode with helpful error message
- Return explicit error when MCP client lacks sampling support (no silent fallthrough)
- Update describegpt guidance to reflect MCP-only behavior
- Add per-command skip list in mcp_skills_gen.rs to strip MCP-irrelevant options
  from describegpt skill JSON (--prompt, --base-url, --model, --sql-results, etc.)
- Keep --prepare-context and --process-response (needed for MCP sampling)
- Regenerate skill JSON and help markdown

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…terception

Change `if (params.prompt)` to `if (params.prompt !== undefined)` so that
an empty string value doesn't bypass the MCP server mode block and fall
through to the sampling path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
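The truthiness pitfall this commit fixes can be sketched as follows. Only the `prompt` parameter name comes from the PR; the handler shapes are hypothetical, not the merged code.

```typescript
// Minimal sketch of the truthiness pitfall fixed above.
type Params = { prompt?: string };

// Buggy form: `if (params.prompt)` — an empty string is falsy, so
// { prompt: "" } would bypass the MCP server mode block.
const blockedBuggy = (params: Params): boolean => Boolean(params.prompt);

// Fixed form: explicit undefined check — any provided value,
// including "", triggers the block.
const blockedFixed = (params: Params): boolean => params.prompt !== undefined;
```

With `{ prompt: "" }`, the buggy check lets the request fall through to the sampling path, while the fixed check blocks it.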
…fix MCP acronym

- Add "MCP" to ACRONYMS in help_markdown_gen.rs so heading renders as
  "MCP Sampling Options" instead of "Mcp Sampling Options"
- Skip examples generation for describegpt skill JSON (CLI examples
  reference MCP-stripped options like --api-key, --base-url, --prompt)
- Add --prepare-context and --process-response back to skip list (internal
  to MCP sampling orchestration, not user-facing)
- Block LLM API params (base_url, api_key, model, max_tokens) in
  describegpt MCP intercept for defense-in-depth
- Regenerate skill JSON and help markdown

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add describegpt to COMMON_COMMANDS for a dedicated qsv_describegpt
  tool with properly typed params from skill JSON (no -- prefix issues)
- Parse _llm_responses as JSON string or array for agent-as-LLM fallback
  (fixes "llmResponses.find is not a function" error)
- Fix isHelpRequest, --prompt block, and blocked LLM param checks to
  handle -- prefixed keys from qsv_command generic tool path
- Add early inference option validation (--dictionary/--description/
  --tags/--all required) with helpful error message
- Move inference check after _llm_responses so Phase 3 callbacks work
- Update COMMAND_GUIDANCE to document two-call agent-as-LLM pattern
- Extract prepareContextForAgent and processAgentResponses into
  standalone functions for the agent-as-LLM fallback flow
- Update test counts from 11 to 12 for COMMON_COMMANDS

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Validate JSON.parse result of _llm_responses is an array (not just valid JSON)
- Extract shared DescribegptPrepareOutput interface to eliminate inline type
  duplication between prepareContextForAgent and processAgentResponses
- Normalize inference option keys using same --/hyphen logic as buildDescribegptArgs
  for consistent parameter handling regardless of key format

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
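The string-or-array handling plus the "valid JSON but not an array" check described above can be sketched like this. The function name and error messages are illustrative; the real logic lives in mcp-tools.ts.

```typescript
// Hedged sketch: accept _llm_responses as a JSON string or an array,
// and reject parses that are valid JSON but not arrays.
function parseLlmResponses(raw: unknown): unknown[] {
  let value = raw;
  if (typeof value === "string") {
    try {
      value = JSON.parse(value); // string form: parse first
    } catch {
      throw new Error("_llm_responses is not valid JSON");
    }
  }
  // Valid JSON is not enough — an object or number must still be rejected.
  if (!Array.isArray(value)) {
    throw new Error("_llm_responses must be an array");
  }
  return value;
}
```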
Export runQsvCapture and ensure describegpt outputs are persisted to files. Changes:

- Added export for runQsvCapture
- Added extname import
- Marked "describegpt" as non-tabular
- If the describegpt subprocess emits no stdout, return a message pointing to the --output path
- Prevent using temp files for describegpt
- Auto-generate an output filename alongside the input (.<input>.describegpt.md) when none is provided

These changes keep describegpt's Markdown outputs durable and make the runQsvCapture helper available for reuse.

After the Array.isArray check on _llm_responses (both parsed-from-string
and already-array paths), validate that the first element has the required
"kind" and "response" string fields. This catches malformed elements early
rather than silently producing undefined values downstream.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ases

- Extract `findOutputPath` and `describegptFallbackResult` helpers into utils.ts,
  replacing duplicated inline `find` callbacks in mcp-tools.ts and mcp-sampling.ts
- Handle `--output=path` format (single arg with =) in addition to `--output path`
- Add defensive `inputFile` guard before auto-generating output filename
- Provide clearer fallback message when no output path is found

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
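An illustrative version of a `findOutputPath` helper handling both argument forms might look like this. This is a sketch, not the code merged into `.claude/skills/src/utils.ts`.

```typescript
// Sketch: scan CLI args for the output path, handling both the
// space-separated form (`--output path`) and the single-arg form
// (`--output=path`) mentioned in the commit above.
function findOutputPath(args: string[]): string | undefined {
  for (let i = 0; i < args.length; i++) {
    const arg = args[i];
    if (arg === "--output") return args[i + 1]; // space-separated form
    if (arg.startsWith("--output=")) return arg.slice("--output=".length);
  }
  return undefined; // caller falls back to auto-generated filename
}
```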
…tion logic

Extract `validateLlmResponseElements` helper to eliminate duplicated shape-check
between the string-parse and already-array paths. Use a loop over all elements
instead of only checking the first, so malformed elements at any index are caught
with a precise error message including the index.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…lidation

Add explicit null, primitive, and array checks before casting to
Record<string, unknown> in validateLlmResponseElements, preventing a
TypeError when null elements are encountered.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
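Taken together, the last few commits converge on a validator shaped roughly like the sketch below: loop over every element (not just the first), guard against null/primitive/array values before the `Record` cast, and report the failing index. Names and messages are illustrative.

```typescript
// Hedged sketch of the final validateLlmResponseElements shape.
function validateLlmResponseElements(elements: unknown[]): void {
  for (let i = 0; i < elements.length; i++) {
    const el = elements[i];
    // Explicit null/primitive/array checks before casting, so a null
    // element raises a clear error instead of a TypeError downstream.
    if (el === null || typeof el !== "object" || Array.isArray(el)) {
      throw new Error(`_llm_responses[${i}] must be an object`);
    }
    const rec = el as Record<string, unknown>;
    // Required shape: string "kind" and "response" fields.
    if (typeof rec.kind !== "string" || typeof rec.response !== "string") {
      throw new Error(
        `_llm_responses[${i}] needs string "kind" and "response" fields`,
      );
    }
  }
}
```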
jqnatividad and others added 2 commits March 7, 2026 07:57
Merge origin/master into mcp-describegpt-llm-passthru-tweaks.
Single conflict in mcp-tools.ts resolved by keeping the branch's
agent-as-LLM fallback, dash-param normalization, help request
handling, and _llm_responses validation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove double blank line at the merge seam in mcp-tools.ts
(between outputFile assignment and capabilities check).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Copilot AI (Contributor) left a comment

Pull request overview

Refactors the MCP integration around describegpt to better support MCP sampling and an “agent-as-LLM” fallback path, while tightening which CLI flags/examples are exposed in MCP-generated skill definitions and updating related docs/tests.

Changes:

  • Adjust MCP skill generation to omit describegpt examples and strip describegpt CLI flags that don’t apply in MCP mode.
  • Extend MCP server/tooling to treat describegpt as non-tabular output, add a fallback result message when stdout is empty, and add an agent-driven two-call flow when MCP sampling isn’t available.
  • Update docs/tests and the qsv-describegpt skill JSON to reflect MCP sampling options and the expanded COMMON_COMMANDS list.

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 4 comments.

Show a summary per file
File Description
src/mcp_skills_gen.rs Skips describegpt examples and filters MCP-irrelevant describegpt options during skill generation.
src/help_markdown_gen.rs Preserves MCP as an uppercase acronym in generated headings.
docs/help/describegpt.md Documents --prepare-context / --process-response under “MCP Sampling Options” and updates usage/nav.
.claude/skills/tests/mcp-server.test.ts Updates COMMON_COMMANDS count expectation to 12.
.claude/skills/tests/deferred-loading.test.ts Updates COMMON_COMMANDS count expectation to 12.
.claude/skills/src/utils.ts Adds findOutputPath + describegptFallbackResult helpers.
.claude/skills/src/mcp-tools.ts Adds describegpt passthrough logic, sampling/fallback orchestration, stricter param handling, and auto output file naming.
.claude/skills/src/mcp-server.ts Wires MCP server through to generic tool handling.
.claude/skills/src/mcp-sampling.ts Exports runQsvCapture and uses describegptFallbackResult when stdout is empty.
.claude/skills/qsv/qsv-describegpt.json Removes MCP-irrelevant options/examples from the describegpt skill definition.

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

Comments suppressed due to low confidence (1)

.claude/skills/src/mcp-tools.ts:2592

  • buildDescribegptArgs iterates over all params and will emit duplicate flags/values if the same option is provided in multiple key forms (e.g., "dictionary" and "--dictionary"). This can lead to ambiguous precedence when values differ; consider tracking seen normalized keys and choosing a single source of truth (e.g., prefer canonical keys over dashed keys).
  for (const [rawParam, value] of Object.entries(params)) {
    // Normalize: strip leading --, convert - to _
    const normalized = rawParam.replace(/^--/, "").replace(/-/g, "_");
    const flag = reverseLookup.get(normalized);
    if (!flag) continue;
    if (value === undefined || value === null || value === false) continue;
    if (value === true) {
      args.push(flag);
    } else {
      args.push(flag, String(value));
    }
  }
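One way to implement Copilot's suggestion — a hedged sketch, not the merged code — is to sort canonical keys ahead of dashed ones and track seen normalized keys, so each flag is emitted at most once. The `reverseLookup` contents here are illustrative.

```typescript
// Sketch: de-duplicate params supplied in multiple key forms
// (e.g. "dictionary" and "--dictionary"), preferring canonical keys.
const reverseLookup = new Map<string, string>([["dictionary", "--dictionary"]]);

function buildArgsDeduped(params: Record<string, unknown>): string[] {
  const args: string[] = [];
  // Canonical keys sort before "--" keys; first writer wins below.
  const entries = Object.entries(params).sort(
    ([a], [b]) => Number(a.startsWith("--")) - Number(b.startsWith("--")),
  );
  const seen = new Set<string>();
  for (const [rawParam, value] of entries) {
    const normalized = rawParam.replace(/^--/, "").replace(/-/g, "_");
    const flag = reverseLookup.get(normalized);
    if (!flag || seen.has(normalized)) continue;
    if (value === undefined || value === null || value === false) continue;
    seen.add(normalized);
    if (value === true) args.push(flag);
    else args.push(flag, String(value));
  }
  return args;
}
```

With both `dictionary: "x"` and `"--dictionary": true` supplied, only the canonical key's value is forwarded.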

jqnatividad and others added 2 commits March 7, 2026 08:42
…te --help

- Type `phaseResponses` array as `PhaseResponse[]` to catch shape mismatches
  at compile time instead of silently producing `any[]`
- Strip `--help` from params after reading it to prevent duplicate flag
  forwarding alongside `options.help=true`

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
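The read-then-strip pattern from the second bullet can be sketched as below; the params shape and function name are hypothetical.

```typescript
// Sketch: read the help flag once, then remove both key forms so the
// raw "--help" param isn't forwarded again alongside options.help=true.
function takeHelpFlag(params: Record<string, unknown>): boolean {
  const help = params["--help"] === true || params.help === true;
  delete params["--help"];
  delete params.help;
  return help;
}
```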
Move the PhaseResponse interface from inside processAgentResponses to
module scope alongside related describegpt types for consistency and
discoverability.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@jqnatividad jqnatividad merged commit 4c51d4c into master Mar 7, 2026
25 checks passed
@jqnatividad jqnatividad deleted the mcp-describegpt-llm-passthru-tweaks branch March 7, 2026 13:46