refactor: Incorporate prisma-crud repository into baseplate repository #645
Conversation
Walkthrough

Adds dynamic example discovery to CI and introduces a new workflow to verify example sync. Adds a complete "blog-with-auth" backend example (Fastify, GraphQL, Prisma, auth, sessions, queues) with schema, services, plugins, tests, and tooling. Adds a "todo-with-auth0" example scaffold. Adds a changeset noting unified project resolver support in the CLI.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Client
    participant F as Fastify (graphql route)
    participant Yoga as GraphQL Yoga
    participant Res as Resolver (Pothos)
    participant UPS as user-password.service
    participant PR as Prisma
    participant USS as user-session.service
    participant CK as Cookie
    C->>F: POST /graphql (loginWithEmailPassword)
    F->>Yoga: forward request/response
    Yoga->>Res: execute mutation
    Res->>UPS: authenticateUserWithEmailAndPassword(input, context)
    UPS->>PR: find user/account by email/provider
    UPS->>UPS: verify password (argon2)
    UPS->>USS: createSession(userId, context)
    USS->>PR: insert session, read roles
    USS->>CK: set signed session cookie
    USS-->>UPS: { userId, expiresAt, roles }
    UPS-->>Res: { session }
    Res-->>Yoga: payload
    Yoga-->>F: HTTP response
    F-->>C: 200 OK + Set-Cookie
```
```mermaid
sequenceDiagram
    autonumber
    participant C as Client
    participant F as Fastify
    participant AP as authPlugin (onRequest)
    participant USS as user-session.service
    participant RC as request-context
    participant GQL as GraphQL handler
    C->>F: Any request
    F->>AP: onRequest hook
    AP->>USS: getSessionInfoFromRequest(req, reply?)
    USS->>RC: set userId in request-context
    AP->>AP: createAuthContextFromSessionInfo
    AP-->>F: req.auth attached
    F->>GQL: handle /graphql with context.auth
    GQL-->>C: Response
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🦋 Changeset detected

Latest commit: d9066b3. The changes in this PR will be included in the next version bump. This PR includes changesets to release 18 packages.
Actionable comments posted: 60
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
examples/todo-with-auth0/.gitignore (1)

139-141: Move the `baseplate/.build` ignore rule to the repository root

After running the search and checking for the example-local directory, it's clear that the ignore rule in `examples/todo-with-auth0/.gitignore` doesn't match anything there (no `examples/todo-with-auth0/baseplate/.build` folder exists), and the root `.gitignore` is currently missing this entry.

- Remove the misplaced rule from the example's `.gitignore`
- Add the rule to the repository's top-level `.gitignore` so it correctly ignores `baseplate/.build` at the repo root
- No changes needed in `examples/blog-with-auth/.gitignore` or `packages/create-project/templates/.gitignore`; those contexts include a local `baseplate` directory where the rule belongs

Apply in `examples/todo-with-auth0/.gitignore`:

```diff
-# Baseplate build artifacts
-baseplate/.build
```

Then add to the project root `.gitignore` (at any logical grouping, e.g., under "# Baseplate build artifacts"):

```diff
+# Baseplate build artifacts
+baseplate/.build
```
```yaml
      - name: List example directories
        id: list-examples
        run: |
          # List directories in examples folder and format as JSON array
          cd examples
          directories=($(find . -maxdepth 1 -type d -not -name "." | sed 's|./||' | sort))

          # Create JSON array
          json_array=()
          for dir in "${directories[@]}"; do
            json_array+=("\"$dir\"")
          done

          # Join array elements with commas
          json_elements=$(IFS=,; echo "${json_array[*]}")
          json_output="[${json_elements}]"

          echo "Found examples: $json_output"
          echo "examples=$json_output" >> "$GITHUB_OUTPUT"
```
🛠️ Refactor suggestion

Make example discovery robust and shellcheck-clean

- Avoid word-splitting (SC2207) by using `mapfile`.
- Handle an absent or empty `examples/` gracefully.
- Emit JSON reliably even when names have spaces.

```diff
 - name: List example directories
   id: list-examples
   run: |
-    # List directories in examples folder and format as JSON array
-    cd examples
-    directories=($(find . -maxdepth 1 -type d -not -name "." | sed 's|./||' | sort))
-
-    # Create JSON array
-    json_array=()
-    for dir in "${directories[@]}"; do
-      json_array+=("\"$dir\"")
-    done
-
-    # Join array elements with commas
-    json_elements=$(IFS=,; echo "${json_array[*]}")
-    json_output="[${json_elements}]"
-
-    echo "Found examples: $json_output"
-    echo "examples=$json_output" >> "$GITHUB_OUTPUT"
+    set -euo pipefail
+    if [ ! -d examples ]; then
+      echo "No examples/ directory found; emitting empty list"
+      echo 'examples=[]' >> "$GITHUB_OUTPUT"
+      exit 0
+    fi
+    cd examples
+    mapfile -t directories < <(find . -maxdepth 1 -type d -not -name "." -printf '%f\n' | sort)
+    if [ "${#directories[@]}" -eq 0 ]; then
+      echo 'examples=[]' >> "$GITHUB_OUTPUT"
+      exit 0
+    fi
+    # Build JSON safely
+    json_elements=$(printf '"%s",' "${directories[@]}")
+    json_output="[${json_elements%,}]"
+    echo "Found examples: $json_output"
+    echo "examples=$json_output" >> "$GITHUB_OUTPUT"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
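The printf-based join in the suggested script can be exercised on its own; this is a minimal sketch with made-up directory names standing in for the workflow's `find` output:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical example names; in the workflow these come from `find`.
directories=("blog-with-auth" "todo-with-auth0")

# printf repeats its format string once per argument, producing
# `"blog-with-auth","todo-with-auth0",`; the ${var%,} expansion then
# strips the trailing comma before the result is wrapped in brackets.
json_elements=$(printf '"%s",' "${directories[@]}")
json_output="[${json_elements%,}]"

echo "$json_output"   # → ["blog-with-auth","todo-with-auth0"]
```

Note this still assumes names contain no double quotes or backslashes; for fully general names, a real JSON encoder such as jq is safer.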
🧰 Tools
🪛 actionlint (1.7.7)
33-33: shellcheck reported issue in this script: SC2207:warning:3:14: Prefer mapfile or read -a to split command output (or quote to avoid splitting)
(shellcheck)
🤖 Prompt for AI Agents
.github/workflows/verify-examples-sync.yml lines 31-50: the current
example-discovery step uses word-splitting prone arrays and fails on empty
folders or names with spaces; replace it with a shellcheck-friendly approach: cd
into examples, use mapfile with a null delimiter to read find's -print0 output
(e.g. mapfile -d '' -t directories < <(find . -maxdepth 1 -type d -not -name "."
-print0 | sort -z)), handle the case where directories is empty and set
json_output="[]", and produce a safe JSON array from the entries (e.g. pass
names through printf '%s\n' and use jq -R -s -c 'split("\n")[:-1]' or another
robust JSON encoder) before echoing and writing examples to GITHUB_OUTPUT.
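The jq encoding the prompt mentions can be tried outside the workflow; the sample names here are illustrative, and jq is assumed to be on PATH:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate find's newline-separated output with two made-up names,
# one containing a space. jq -R reads raw lines, -s slurps them into
# one string, and split("\n")[:-1] drops the trailing empty element.
printf '%s\n' "blog with auth" "todo-with-auth0" |
  jq -R -s -c 'split("\n")[:-1]'
# → ["blog with auth","todo-with-auth0"]
```

Unlike the printf join, this escapes quotes and backslashes correctly, so it is safe for arbitrary directory names.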
```yaml
      - name: Run sync diff for example
        run: |
          echo "Verifying that example '${{ matrix.example }}' has no uncommitted generated code differences..."

          # Set environment variables for project discovery
          export INCLUDE_EXAMPLES=true

          # Run diff command using project name
          cd packages/project-builder-cli

          # Use the built CLI to check for differences
          pnpm start diff ${{ matrix.example }} --compact

          if [ $? -eq 0 ]; then
            echo "✅ Example '${{ matrix.example }}' is in sync with generators"
          else
            echo "❌ Example '${{ matrix.example }}' has differences between generated and committed code"
            echo ""
            echo "This means the example project is not in sync with the latest generator code."
            echo "Please run 'pnpm start diff ${{ matrix.example }}' locally to see the differences"
            echo "and either:"
            echo "  1. Update the generators to match the current code, or"
            echo "  2. Regenerate the example code with 'baseplate start sync ${{ matrix.example }}'"
            exit 1
          fi
```
🛠️ Refactor suggestion

Check the command exit code directly and quote the example name (SC2181)

Use the command in the `if` condition; also quote `${{ matrix.example }}`.

```diff
-          pnpm start diff ${{ matrix.example }} --compact
-
-          if [ $? -eq 0 ]; then
+          if pnpm start diff "${{ matrix.example }}" --compact; then
             echo "✅ Example '${{ matrix.example }}' is in sync with generators"
           else
             echo "❌ Example '${{ matrix.example }}' has differences between generated and committed code"
             echo ""
             echo "This means the example project is not in sync with the latest generator code."
             echo "Please run 'pnpm start diff ${{ matrix.example }}' locally to see the differences"
             echo "and either:"
             echo "  1. Update the generators to match the current code, or"
             echo "  2. Regenerate the example code with 'baseplate start sync ${{ matrix.example }}'"
             exit 1
           fi
```

📝 Committable suggestion
🧰 Tools
🪛 actionlint (1.7.7)
93-93: shellcheck reported issue in this script: SC2181:style:12:6: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?
(shellcheck)
🤖 Prompt for AI Agents
.github/workflows/verify-examples-sync.yml around lines 92 to 116: the script
checks the previous command's exit code via "$?" and uses an unquoted
matrix.example in messages; change the shell to test the command directly in the
if condition (if pnpm start diff "${{ matrix.example }}" --compact; then ...)
and wrap all uses of the example name in double quotes ("${{ matrix.example }}")
to prevent word-splitting; ensure you also run the CLI from the correct
directory (cd packages/project-builder-cli) before the if test and keep the same
success/failure messages and exit 1 on failure.
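The SC2181 fix is easy to demonstrate in isolation; `true` and `false` stand in for the real `pnpm start diff` call:

```shell
#!/usr/bin/env bash
# Checking $? after the fact is fragile: any command run in between
# (even an echo) resets it. Putting the command in the if condition
# tests its exit status directly.
check_sync() {
  if "$@"; then
    echo "in sync"
  else
    echo "differences found"
  fi
}

check_sync true    # → in sync
check_sync false   # → differences found
```

The same pattern works for any command whose success or failure drives a branch, including the quoted `pnpm start diff "${{ matrix.example }}" --compact` in the suggestion.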
@@ -0,0 +1,111 @@
-- CreateTable
Ensure gen_random_uuid() is available

gen_random_uuid() requires the pgcrypto extension. Without it, this migration fails on fresh databases.

Apply at the top of the file:

```diff
+-- Ensure gen_random_uuid() is available
+CREATE EXTENSION IF NOT EXISTS pgcrypto;
```

🤖 Prompt for AI Agents
In
examples/blog-with-auth/packages/backend/prisma/migrations/20241031083623_initial_migration/migration.sql
around line 1, the migration uses gen_random_uuid() but does not ensure the
pgcrypto extension is enabled; update the migration by adding an initial
statement that creates the pgcrypto extension if it does not exist (so
gen_random_uuid() is available on fresh databases) and place it at the very top
of the file so subsequent CREATE TABLE statements can use gen_random_uuid().
```sql
-- CreateIndex
CREATE UNIQUE INDEX "user_auth_provider_provider_type_provider_id_key" ON "user_auth_provider"("provider_type", "provider_id");

-- CreateIndex
CREATE UNIQUE INDEX "user_session_token_key" ON "user_session"("token");

-- AddForeignKey
ALTER TABLE "blog" ADD CONSTRAINT "blog_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "user"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "blog_post" ADD CONSTRAINT "blog_post_publisher_id_fkey" FOREIGN KEY ("publisher_id") REFERENCES "user"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "blog_post" ADD CONSTRAINT "blog_post_blog_id_fkey" FOREIGN KEY ("blog_id") REFERENCES "blog"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "blog_user" ADD CONSTRAINT "blog_user_blog_id_fkey" FOREIGN KEY ("blog_id") REFERENCES "blog"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "blog_user" ADD CONSTRAINT "blog_user_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "user"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "user_auth_provider" ADD CONSTRAINT "user_auth_provider_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "user"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "user_role" ADD CONSTRAINT "user_role_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "user"("id") ON DELETE CASCADE ON UPDATE RESTRICT;

-- AddForeignKey
ALTER TABLE "user_session" ADD CONSTRAINT "user_session_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "user"("id") ON DELETE CASCADE ON UPDATE RESTRICT;
```
🛠️ Refactor suggestion

Add indexes for foreign keys to prevent table scans under load

Postgres doesn't auto-index FKs. Given the expected joins, add indexes on FK columns (and the non-leading column of composite PKs).

Append these statements:

```diff
+-- Performance indexes for FK lookups
+CREATE INDEX IF NOT EXISTS idx_blog_user_id ON "blog" ("user_id");
+CREATE INDEX IF NOT EXISTS idx_blog_post_blog_id ON "blog_post" ("blog_id");
+CREATE INDEX IF NOT EXISTS idx_blog_post_publisher_id ON "blog_post" ("publisher_id");
+CREATE INDEX IF NOT EXISTS idx_blog_user_user_id ON "blog_user" ("user_id");
+CREATE INDEX IF NOT EXISTS idx_user_role_user_id ON "user_role" ("user_id");
+CREATE INDEX IF NOT EXISTS idx_user_session_user_id ON "user_session" ("user_id");
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In
examples/blog-with-auth/packages/backend/prisma/migrations/20241031083623_initial_migration/migration.sql
around lines 83-111, add non-unique indexes on all foreign-key columns to avoid
table scans under load: create indexes on blog(user_id),
blog_post(publisher_id), blog_post(blog_id), blog_user(blog_id),
blog_user(user_id), user_auth_provider(user_id), user_role(user_id), and
user_session(user_id). Also add indexes for any non-leading column(s) of
composite primary/unique keys used in joins (e.g., if a composite key exists
where the second column is commonly used in queries, create an index on that
second column). Ensure index names are unique and follow the existing naming
convention.
```sql
-- AlterTable
ALTER TABLE "user" ADD COLUMN "email_verified" BOOLEAN NOT NULL DEFAULT false,
ADD COLUMN "name" TEXT;

-- DropTable
DROP TABLE "user_auth_provider";
```
Prefer renaming is_email_verified instead of introducing email_verified

If the older column exists from the 2024 migration, adding a second boolean (`email_verified`) creates duplicate semantics and possible drift. Safer migration:

```diff
--- a/examples/blog-with-auth/packages/backend/prisma/migrations/20250709120734_initial_migration/migration.sql
-ALTER TABLE "user" ADD COLUMN "email_verified" BOOLEAN NOT NULL DEFAULT false,
-ADD COLUMN "name" TEXT;
+-- If "is_email_verified" exists, rename it to keep data
+DO $$
+BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'user' AND column_name = 'is_email_verified'
+  ) THEN
+    EXECUTE 'ALTER TABLE "user" RENAME COLUMN "is_email_verified" TO "email_verified"';
+  ELSE
+    EXECUTE 'ALTER TABLE "user" ADD COLUMN "email_verified" BOOLEAN NOT NULL DEFAULT false';
+  END IF;
+END$$;
+ALTER TABLE "user" ADD COLUMN "name" TEXT;
```

Alternatively, add a small follow-up migration that performs the rename.
```dockerfile
COPY packages/backend ./packages/backend

# Build the project
RUN pnpm turbo run @prisma-crud/backend#build
```
Turbo command likely incorrect; use standard task/filter syntax.

`pnpm turbo run @prisma-crud/backend#build` is not a valid Turbo invocation. Run the build task and filter to the package.

Apply this diff:

```diff
-# Build the project
-RUN pnpm turbo run @prisma-crud/backend#build
+# Build the project for the backend package
+RUN pnpm turbo run build --filter=@prisma-crud/backend...
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In examples/todo-with-auth0/Dockerfile around line 29, the Turbo invocation uses
an invalid package#task syntax; replace it with the standard Turbo task/filter
form by running the build task and filtering to the @prisma-crud/backend package
(e.g., use pnpm turbo run build --filter=@prisma-crud/backend).
```dockerfile
RUN pnpm --filter=@prisma-crud/backend deploy --prod /prod/backend --legacy

# Copy Prisma schemas into the deployed node_modules
RUN find . -path '*/node_modules/.pnpm/@prisma+client*/node_modules/.prisma/client' | xargs -r -I{} sh -c "rm -rf /prod/backend/{} && cp -R {} /prod/backend/{}"
```
Prisma Client copy path is wrong (copies into /prod/backend/packages/backend/... instead of /prod/backend/node_modules).

The current `find ... | xargs cp -R {} /prod/backend/{}` preserves the original path, creating an extra `packages/backend/...` inside `/prod/backend`, which the app won't use. Copy the `.prisma/client` directory into the deployed node_modules path(s) under `/prod/backend`.

Apply this diff:

```diff
-# Copy Prisma schemas into the deployed node_modules
-RUN find . -path '*/node_modules/.pnpm/@prisma+client*/node_modules/.prisma/client' | xargs -r -I{} sh -c "rm -rf /prod/backend/{} && cp -R {} /prod/backend/{}"
+# Copy Prisma Client artifacts into the deployed node_modules
+# Find one source Prisma client (built in workspace) and copy into the deployed @prisma/client's node_modules
+RUN set -euo pipefail; \
+    src="$(find packages/backend -path '*/node_modules/.pnpm/@prisma+client*/node_modules/.prisma/client' -type d -print -quit)"; \
+    dest_base="$(find /prod/backend -path '*/node_modules/.pnpm/@prisma+client*/node_modules' -type d -print -quit)"; \
+    if [ -n "${src:-}" ] && [ -n "${dest_base:-}" ]; then \
+      rm -rf "${dest_base}/.prisma/client"; \
+      mkdir -p "${dest_base}/.prisma"; \
+      cp -R "${src}" "${dest_base}/.prisma/"; \
+    else \
+      echo "WARN: Prisma Client artifacts not found; skipping copy"; \
+    fi
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In examples/todo-with-auth0/Dockerfile around line 35, the current find+cp
preserves the full source path and creates nested packages/backend/... under
/prod/backend; instead, create the corresponding node_modules destination under
/prod/backend and copy each found .prisma/client into that package’s
node_modules folder. Change the command so for each matched "{}" you compute the
package name (the directory under node_modules that contains .prisma/client),
mkdir -p /prod/backend/node_modules/<package-path> and then copy the
.prisma/client contents into
/prod/backend/node_modules/<package-path>/.prisma/client (ensuring parent dirs
are created) so the app finds the client at
/prod/backend/node_modules/.../.prisma/client rather than under an extra
packages/backend path.
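The path bug and its fix can be reproduced against a throwaway directory tree; every path below is an illustrative stand-in for the real pnpm store layout:

```shell
#!/usr/bin/env bash
set -euo pipefail

work=$(mktemp -d)

# Fake workspace source and deployed destination (names are made up).
src="$work/packages/backend/node_modules/.pnpm/@prisma+client@0.0.0/node_modules/.prisma/client"
dest_base="$work/prod/backend/node_modules/.pnpm/@prisma+client@0.0.0/node_modules"
mkdir -p "$src" "$dest_base"
echo "generated client" > "$src/index.js"

# Copy the .prisma/client directory itself into the deployed
# node_modules, rather than replaying the full source path under
# the destination root (which is what the original xargs command did).
rm -rf "$dest_base/.prisma/client"
mkdir -p "$dest_base/.prisma"
cp -R "$src" "$dest_base/.prisma/"

ls "$dest_base/.prisma/client/index.js"
rm -rf "$work"
```

The key point is that `cp -R "$src" "$dest_base/.prisma/"` places the artifacts where the deployed app resolves them, while the original `cp -R {} /prod/backend/{}` recreated the entire source path under the destination.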
```gitignore
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
```
🛠️ Refactor suggestion

Add generic env ignores and protect against accidental secret commits.

Right now, plain `.env` files are not ignored; only the `*.local` variants are. In examples, it's easy to accidentally commit secrets. Consider broadening the patterns while still allowing checked-in templates.

Apply this diff to extend coverage:

```diff
 # misc
 .DS_Store
 .env.local
 .env.development.local
 .env.test.local
 .env.production.local
+.env
+.env.*.local
+.env.development
+.env.test
+.env.production
+!.env.example
+!.env.*.example
```

📝 Committable suggestion
+!.env.*.example📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # misc | |
| .DS_Store | |
| .env.local | |
| .env.development.local | |
| .env.test.local | |
| .env.production.local | |
| # misc | |
| .DS_Store | |
| .env.local | |
| .env.development.local | |
| .env.test.local | |
| .env.production.local | |
| .env | |
| .env.*.local | |
| .env.development | |
| .env.test | |
| .env.production | |
| !.env.example | |
| !.env.*.example |
🤖 Prompt for AI Agents
In examples/todo-with-auth0/packages/admin/.gitignore around lines 33-38, the
file only ignores .env.*.local files which risks committing plain .env secrets;
update the ignore rules to add generic environment patterns (e.g., add entries
for .env and .env.*) while explicitly allowing checked-in templates (e.g., add
negated patterns like !.env.example and/or !.env.template) so real secret files
are ignored but example/template env files remain tracked.
```json
[
  { "canonicalPath": "@/src/routes", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/auth", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/admin", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/admin/todos", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/admin/todos/todo-list", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/admin/accounts", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/admin/accounts/users", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/admin/accounts/users/user", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/todos", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/storage", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/accounts", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/accounts/users", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/routes/accounts/auth", "pathRootName": "routes-root" },
  { "canonicalPath": "@/src/components", "pathRootName": "components-root" },
  { "canonicalPath": "@", "pathRootName": "package-root" },
  { "canonicalPath": "@/src", "pathRootName": "src-root" }
]
```
💡 Verification agent
🧩 Analysis chain
Add a lightweight validation check in CI for uniqueness and allowed groups
Guard against accidental duplicates or typos in pathRootName.
Run this script to validate shape, uniqueness, and allowed group names:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail

FILE="examples/todo-with-auth0/packages/admin/.paths-metadata.json"
ALLOWED='["routes-root","components-root","package-root","src-root"]'

# 1) Ensure every item has required keys and allowed pathRootName
jq -e --argjson allowed "$ALLOWED" '
  all(.[]; has("canonicalPath") and has("pathRootName")) and
  all(.[]; .pathRootName as $g | any($allowed[]; . == $g))
' "$FILE" >/dev/null

# 2) Ensure canonicalPath values are unique
dups=$(jq -r '.[].canonicalPath' "$FILE" | sort | uniq -d)
if [[ -n "$dups" ]]; then
  echo "Duplicate canonicalPath entries found:"
  echo "$dups"
  exit 1
fi

# 3) Optional: assert sorted order by canonicalPath for stable diffs
if ! diff -u <(jq -S '.|sort_by(.canonicalPath)' "$FILE") <(jq -S '.' "$FILE") >/dev/null; then
  echo "Entries are not sorted by canonicalPath. Consider sorting to stabilize diffs."
  exit 2
fi

echo "paths-metadata validation passed."
```

Length of output: 1027
Enforce metadata validation and stable ordering in CI
The .paths-metadata.json file meets all key-presence, allowed-group, and uniqueness checks, but its entries aren’t currently sorted by `canonicalPath`, which can lead to noisy diffs. Let’s add a CI job that:

- Validates each item has both `canonicalPath` and `pathRootName`, and that every `pathRootName` is one of the allowed groups.
- Ensures no duplicate `canonicalPath` values.
- Verifies the array is sorted by `canonicalPath` to stabilize diffs.
Suggested CI validation script (e.g. add as .github/workflows/validate-paths.yml step):
```bash
#!/usr/bin/env bash
set -euo pipefail

FILE="examples/todo-with-auth0/packages/admin/.paths-metadata.json"
ALLOWED='["routes-root","components-root","package-root","src-root"]'

# 1) Shape and allowed groups
jq -e --argjson allowed "$ALLOWED" '
  all(.[]; has("canonicalPath") and has("pathRootName")) and
  all(.[]; .pathRootName as $g | any($allowed[]; . == $g))
' "$FILE" >/dev/null

# 2) Uniqueness
dups=$(jq -r '.[].canonicalPath' "$FILE" | sort | uniq -d)
if [[ -n "$dups" ]]; then
  echo "❌ Duplicate canonicalPath entries found:"
  echo "$dups"
  exit 1
fi

# 3) Stable ordering
# Fail if the file isn't sorted by canonicalPath
jq -e '(. == sort_by(.canonicalPath))' "$FILE" \
  || { echo "❌ .paths-metadata.json must be sorted by canonicalPath"; exit 1; }

echo "✅ paths-metadata validation passed."
```

Before merging, please:

- Sort the entries in `examples/todo-with-auth0/packages/admin/.paths-metadata.json` by `canonicalPath`.
- Commit the sorted file.
- Hook up the above validation in your CI pipeline.
For this project, you will need to set up environment variables that the application requires to function properly. One essential variable is `NPM_TOKEN`, which is used for accessing private npm packages. If you set this project up with the project creator, you should have already been asked for this token. Otherwise, to set this up:

1. Create a `.env` file in the root directory of your project.
2. Add the following line to the `.env` file:

```bash
NPM_TOKEN=<your-npm-token>
```

You can obtain the NPM token from your engineering manager.
Harden token handling; add .npmrc guidance and avoid committing secrets.
Current instructions may lead newcomers to commit tokens. Recommend using .env.example and .gitignore, and documenting .npmrc usage with env substitution. Suggested edits:
````diff
-For this project, you will need to set up environment variables that the application requires to function properly. One essential variable is `NPM_TOKEN`, which is used for accessing private npm packages. If you set this project up with the project creator, you should have already been asked for this token. Otherwise, to set this up:
-
-1. Create a `.env` file in the root directory of your project.
-2. Add the following line to the `.env` file:
-
-```bash
-NPM_TOKEN=<your-npm-token>
-```
-
-You can obtain the NPM token from your engineering manager.
+For private packages, set an `NPM_TOKEN` in your local environment:
+
+1. Copy `.env.example` to `.env` and set:
+   ```
+   NPM_TOKEN=<your-npm-token>
+   ```
+2. Ensure `.env` is in `.gitignore` (do not commit secrets).
+3. Configure `.npmrc` to read the token at install time:
+   ```
+   //registry.npmjs.org/:_authToken=${NPM_TOKEN}
+   ```
+Obtain the token from your engineering manager.
````

This prevents accidental secret leakage while keeping installs working.
🤖 Prompt for AI Agents
In examples/todo-with-auth0/README.md around lines 15 to 25, the current
guidance directly instructs users to create a .env with NPM_TOKEN which risks
committing secrets; update the README to (1) instruct users to copy .env.example
to .env and populate NPM_TOKEN, (2) explicitly tell them to ensure .env is
listed in .gitignore and never committed, and (3) document adding a .npmrc entry
that uses environment substitution (e.g.
//registry.npmjs.org/:_authToken=${NPM_TOKEN}) so installs consume the token at
install time — make these edits to the README and ensure .env.example exists in
the repo.
Review continued from previous batch...
```typescript
import { userSessionPayload } from '../../schema/user-session-payload.object-type.js';
import { userObjectType } from '../../schema/user.object-type.js';
import {
  authenticateUserWithEmailAndPassword,
  changeUserPassword,
  registerUserWithEmailAndPassword,
  resetUserPassword,
} from '../services/user-password.service.js';
```
🛠️ Refactor suggestion
Switch relative imports to the configured @src alias
Guideline calls for absolute imports via tsconfig paths. Convert these to use @src for consistency and better refactors.
```diff
-import { userSessionPayload } from '../../schema/user-session-payload.object-type.js';
-import { userObjectType } from '../../schema/user.object-type.js';
+import { userSessionPayload } from '@src/modules/accounts/schema/user-session-payload.object-type.js';
+import { userObjectType } from '@src/modules/accounts/schema/user.object-type.js';
@@
-} from '../services/user-password.service.js';
+} from '@src/modules/accounts/password/services/user-password.service.js';
```

📝 Committable suggestion
```typescript
import { userSessionPayload } from '@src/modules/accounts/schema/user-session-payload.object-type.js';
import { userObjectType } from '@src/modules/accounts/schema/user.object-type.js';
import {
  authenticateUserWithEmailAndPassword,
  changeUserPassword,
  registerUserWithEmailAndPassword,
  resetUserPassword,
} from '@src/modules/accounts/password/services/user-password.service.js';
```
🤖 Prompt for AI Agents
In
examples/blog-with-auth/packages/backend/src/modules/accounts/password/schema/user-password.mutations.ts
(lines 3–10), change the relative imports to use the configured @src alias:
replace '../../schema/user-session-payload.object-type.js' and
'../../schema/user.object-type.js' with
'@src/modules/accounts/schema/user-session-payload.object-type.js' and
'@src/modules/accounts/schema/user.object-type.js' respectively, and replace
'../services/user-password.service.js' with
'@src/modules/accounts/password/services/user-password.service.js'; ensure the
new absolute paths match your tsconfig paths mapping and update any import file
extensions if your build expects .ts instead of .js so the project compiles.
```typescript
// check for password match
const isValid = await verifyPasswordHash(
  userAccount.password ?? '',
  password,
);
if (!isValid) {
  throw new BadRequestError('Invalid password', 'invalid-password');
}
```
🛠️ Refactor suggestion
Harden login verify when stored hash is missing or malformed
As above, verifying with an empty/undefined hash can throw. Guard and treat as invalid password.
```diff
-  // check for password match
-  const isValid = await verifyPasswordHash(
-    userAccount.password ?? '',
-    password,
-  );
-  if (!isValid) {
+  // check for password match
+  if (!userAccount.password) {
+    throw new BadRequestError('Invalid password', 'invalid-password');
+  }
+  let isValid = false;
+  try {
+    isValid = await verifyPasswordHash(userAccount.password, password);
+  } catch {
+    isValid = false;
+  }
+  if (!isValid) {
     throw new BadRequestError('Invalid password', 'invalid-password');
   }
```

📝 Committable suggestion
```typescript
// check for password match
if (!userAccount.password) {
  throw new BadRequestError('Invalid password', 'invalid-password');
}
let isValid = false;
try {
  isValid = await verifyPasswordHash(userAccount.password, password);
} catch {
  isValid = false;
}
if (!isValid) {
  throw new BadRequestError('Invalid password', 'invalid-password');
}
```
🤖 Prompt for AI Agents
examples/blog-with-auth/packages/backend/src/modules/accounts/password/services/user-password.service.ts
around lines 119 to 126: currently verifyPasswordHash is called even when the
stored hash is missing or malformed which can throw; first check that
userAccount.password is a non-empty string (and optionally matches expected hash
format) and if not immediately throw BadRequestError('Invalid
password','invalid-password'); then call verifyPasswordHash inside a try/catch
and if it throws or returns false, throw the same BadRequestError so malformed
hashes are treated as invalid passwords rather than crashing the login flow.
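The guard pattern this and the previous comment describe can be sketched without the real library. Below, `verifyHash` is a toy stand-in for `argon2.verify` (which rejects on a malformed hash); every name is illustrative, not the example app's actual API:

```typescript
// Toy stand-in for argon2.verify: throws on a malformed hash, otherwise
// compares against a fake "$argon2:<password>" encoding for illustration.
async function verifyHash(hash: string, password: string): Promise<boolean> {
  if (!hash.startsWith('$argon2')) {
    throw new TypeError('malformed hash');
  }
  return hash === `$argon2:${password}`;
}

// Treat a missing or malformed stored hash as "invalid password" instead of
// letting the verifier throw and crash the login flow.
async function isPasswordValid(
  storedHash: string | null | undefined,
  password: string,
): Promise<boolean> {
  if (!storedHash) return false;
  try {
    return await verifyHash(storedHash, password);
  } catch {
    return false;
  }
}
```

The caller can then map a `false` result to the same `BadRequestError` in every branch, so missing accounts, empty hashes, and wrong passwords are indistinguishable to an attacker.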
```typescript
// Get the user's account
const userAccount = await prisma.userAccount.findFirst({
  where: {
    userId,
    providerId: PROVIDER_ID,
  },
});

// Verify current password
const isValid = await verifyPasswordHash(
  userAccount?.password ?? '',
  currentPassword,
);
if (!isValid || !userAccount) {
  throw new BadRequestError(
    'Current password is incorrect',
    'invalid-current-password',
  );
}
```
Bug: verifying against a missing password can throw and bypass your intended error
If userAccount is null or has a null/empty password, verifyPasswordHash will throw (argon2 expects a valid hash). Perform a presence check first and treat verify errors as invalid credentials.
```diff
   const userAccount = await prisma.userAccount.findFirst({
@@
   });

-  // Verify current password
-  const isValid = await verifyPasswordHash(
-    userAccount?.password ?? '',
-    currentPassword,
-  );
-  if (!isValid || !userAccount) {
+  // Verify current password (guard against missing account/hash)
+  if (!userAccount || !userAccount.password) {
+    throw new BadRequestError(
+      'Current password is incorrect',
+      'invalid-current-password',
+    );
+  }
+  let isValid = false;
+  try {
+    isValid = await verifyPasswordHash(userAccount.password, currentPassword);
+  } catch {
+    isValid = false;
+  }
+  if (!isValid) {
     throw new BadRequestError(
       'Current password is incorrect',
       'invalid-current-password',
     );
   }
```

📝 Committable suggestion
```typescript
// Get the user's account
const userAccount = await prisma.userAccount.findFirst({
  where: {
    userId,
    providerId: PROVIDER_ID,
  },
});

// Verify current password (guard against missing account/hash)
if (!userAccount || !userAccount.password) {
  throw new BadRequestError(
    'Current password is incorrect',
    'invalid-current-password',
  );
}
let isValid = false;
try {
  isValid = await verifyPasswordHash(userAccount.password, currentPassword);
} catch {
  isValid = false;
}
if (!isValid) {
  throw new BadRequestError(
    'Current password is incorrect',
    'invalid-current-password',
  );
}
```
🤖 Prompt for AI Agents
In
examples/blog-with-auth/packages/backend/src/modules/accounts/password/services/user-password.service.ts
around lines 164 to 182, the code calls verifyPasswordHash before ensuring
userAccount and its password exist which can cause argon2 to throw and bypass
your intended error handling; first check that userAccount is non-null and that
userAccount.password is a non-empty string and immediately throw the
BadRequestError if missing, then call verifyPasswordHash inside a try/catch and
treat any verification error (or a false result) as an invalid current password
by throwing the same BadRequestError with the 'invalid-current-password' code.
```diff
@@ -0,0 +1,38 @@
+import { requestContext } from '@fastify/request-context';
```
💡 Verification agent
🧩 Analysis chain
Rename file to kebab-case.
auth.plugin.ts → auth-plugin.ts to follow the repository’s naming guideline.
🏁 Script executed:
```bash
#!/bin/bash
# Find all imports that reference auth.plugin
rg -nP "auth\.plugin\.js"
```

Length of output: 289
🏁 Script executed:
```bash
#!/bin/bash
# Locate the auth.plugin.ts file
find . -type f -name 'auth.plugin.ts'
# Find all references to auth.plugin (regardless of extension)
rg -nP "auth\\.plugin"
```

Length of output: 1262
Rename auth.plugin.ts to kebab-case and update all references
This change requires renaming the file and adjusting every import and mapping that points to it:
• Rename
– examples/blog-with-auth/packages/backend/src/modules/accounts/plugins/auth.plugin.ts → auth-plugin.ts
– examples/todo-with-auth0/packages/backend/src/modules/accounts/auth/plugins/auth.plugin.ts → auth-plugin.ts
– packages/fastify-generators/src/generators/auth/auth-plugin/templates/module/plugins/auth.plugin.ts → auth-plugin.ts
• Update imports in examples:
– examples/blog-with-auth/packages/backend/src/modules/accounts/index.ts: change
import { authPlugin } from './plugins/auth.plugin.js' → import { authPlugin } from './plugins/auth-plugin.js'
– examples/todo-with-auth0/packages/backend/src/modules/accounts/auth/index.ts: same update
• Update file-id-map.json entries:
– examples/blog-with-auth/packages/backend/baseplate/file-id-map.json
– examples/todo-with-auth0/packages/backend/baseplate/file-id-map.json
Replace any ".../auth.plugin.ts" or ".../auth.plugin" keys/values with "auth-plugin.ts"/"auth-plugin"
• Update generator config in extractor.json:
– packages/fastify-generators/src/generators/auth/auth-plugin/extractor.json
Change "module/plugins/auth.plugin.ts" to "module/plugins/auth-plugin.ts"
Please ensure every path and import is updated so the renaming doesn’t break builds or imports.
🤖 Prompt for AI Agents
In
examples/blog-with-auth/packages/backend/src/modules/accounts/plugins/auth.plugin.ts
around line 1, rename this file to auth-plugin.ts and update all corresponding
imports and mappings: change imports in
examples/blog-with-auth/packages/backend/src/modules/accounts/index.ts and
examples/todo-with-auth0/packages/backend/src/modules/accounts/auth/index.ts
from './plugins/auth.plugin.js' to './plugins/auth-plugin.js'; update the other
two file paths (examples/todo-with-auth0/.../auth/plugin and
packages/fastify-generators/.../templates/module/plugins/) to auth-plugin.ts;
update file-id-map.json entries in both examples' baseplate directories to
replace any "auth.plugin.ts"/"auth.plugin" keys/values with
"auth-plugin.ts"/"auth-plugin"; and update
packages/fastify-generators/src/generators/auth/auth-plugin/extractor.json to
use "module/plugins/auth-plugin.ts" so all imports and mappings remain
consistent.
```typescript
await prisma.$transaction([
  prisma.userRole.deleteMany({
    where: { userId, role: { notIn: validRoles } },
  }),
  prisma.userRole.createMany({
    data: validRoles.map((role) => ({ userId, role })),
    skipDuplicates: true,
  }),
]);
```
Guard createMany when no roles are provided to avoid Prisma error.
createMany({ data: [] }) will throw; build the transaction array conditionally.
```diff
-  await prisma.$transaction([
-    prisma.userRole.deleteMany({
-      where: { userId, role: { notIn: validRoles } },
-    }),
-    prisma.userRole.createMany({
-      data: validRoles.map((role) => ({ userId, role })),
-      skipDuplicates: true,
-    }),
-  ]);
+  await prisma.$transaction([
+    prisma.userRole.deleteMany({
+      where: { userId, role: { notIn: validRoles } },
+    }),
+    ...(validRoles.length > 0
+      ? [
+          prisma.userRole.createMany({
+            data: validRoles.map((role) => ({ userId, role })),
+            skipDuplicates: true,
+          }),
+        ]
+      : []),
+  ]);
```

🤖 Prompt for AI Agents
In
examples/blog-with-auth/packages/backend/src/modules/accounts/services/user-roles.service.ts
around lines 34 to 42, guard the prisma.userRole.createMany call when validRoles
is empty to avoid Prisma throwing on createMany({ data: [] }); build the
transaction array conditionally by always including the deleteMany operation and
only pushing the createMany operation if validRoles.length > 0 (or similar
truthy check), then call prisma.$transaction with that array so createMany is
omitted when there are no roles to insert.
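The conditional-spread idea can be sketched with plain data, independent of Prisma. The `Op` type below stands in for Prisma's `PrismaPromise`; all names are illustrative:

```typescript
// Each Op describes one operation that would go into the transaction array.
type Op =
  | { kind: 'deleteMany'; userId: string; notIn: string[] }
  | { kind: 'createMany'; data: { userId: string; role: string }[] };

// Always include deleteMany; only append createMany when there are rows,
// so an empty createMany({ data: [] }) is never issued.
function buildRoleOps(userId: string, validRoles: string[]): Op[] {
  return [
    { kind: 'deleteMany', userId, notIn: validRoles },
    ...(validRoles.length > 0
      ? [
          {
            kind: 'createMany' as const,
            data: validRoles.map((role) => ({ userId, role })),
          },
        ]
      : []),
  ];
}
```

In the real service, the same spread expression feeds `prisma.$transaction([...])` directly.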
```typescript
rootBuilderProto.fieldWithInputPayload = function fieldWithInputPayload({
  args,
  input,
  payload,
  ...fieldOptions
}) {
```
🛠️ Refactor suggestion
Add explicit this type and return type for strict TS
Under strict/noImplicitThis, this is any here. Also, per guidelines, include return types on top-level functions. This improves editor help and prevents regressions.
```diff
-rootBuilderProto.fieldWithInputPayload = function fieldWithInputPayload({
-  args,
-  input,
-  payload,
-  ...fieldOptions
-}) {
+rootBuilderProto.fieldWithInputPayload = function fieldWithInputPayload(
+  this: RootFieldBuilder<SchemaTypes, unknown>,
+  {
+    args,
+    input,
+    payload,
+    ...fieldOptions
+  },
+): FieldRef<SchemaTypes, unknown, 'Object'> {
```

📝 Committable suggestion
```typescript
rootBuilderProto.fieldWithInputPayload = function fieldWithInputPayload(
  this: RootFieldBuilder<SchemaTypes, unknown>,
  {
    args,
    input,
    payload,
    ...fieldOptions
  },
): FieldRef<SchemaTypes, unknown, 'Object'> {
```
```typescript
if (originalError instanceof HttpError) {
  return new GraphQLError(originalError.message, {
    ...sharedOptions,
    extensions: {
      ...error.extensions,
      code: originalError.code,
      statusCode: originalError.statusCode,
      extraData: originalError.extraData,
      reqId: requestContext.get('reqInfo')?.id,
    },
  });
```
💡 Verification agent
🧩 Analysis chain
Double-check request id population in requestContext
reqId is read from requestContext.get('reqInfo')?.id. Ensure the request-context plugin is registered and reqInfo is set for GraphQL requests; otherwise reqId will be undefined in GraphQL error extensions.
Also applies to: 100-111
🏁 Script executed:
#!/bin/bash
# Verify request-context registration and reqInfo assignment
rg -nP -C3 --type=ts '\bregister\(\s*requestContext\b'
rg -nP -C3 --type=ts "\bset\(['\"]reqInfo['\"],"
rg -nP -C3 --type=ts "\.reqInfo\b"Length of output: 11670
Register the requestContext plugin before your GraphQL setup
I didn’t find any call to fastify.register(requestContext) in the codebase, so the reqInfo context never gets populated for GraphQL requests—meaning requestContext.get('reqInfo')?.id will always be undefined. To fix this:
• In your main application bootstrap file (e.g. examples/blog-with-auth/packages/backend/src/app.ts or src/index.ts), add:
```typescript
import requestContext from './plugins/request-context';

fastify.register(requestContext);
```

• Ensure this registration happens before you register the GraphQL plugin (in src/plugins/graphql/index.ts), for example:

```typescript
fastify
  .register(requestContext) // <— here
  .register(graphqlPlugin, {...});
```

• After doing so, requestContext.get('reqInfo')?.id will reliably return the request ID in your GraphQL error extensions.
🤖 Prompt for AI Agents
In examples/blog-with-auth/packages/backend/src/plugins/graphql/index.ts around
lines 85 to 95, the GraphQL error builder reads
requestContext.get('reqInfo')?.id but I couldn't find requestContext being
registered anywhere, so reqInfo will always be undefined; fix by registering the
requestContext plugin in your main bootstrap (e.g.
examples/blog-with-auth/packages/backend/src/app.ts or src/index.ts) before
registering the GraphQL plugin—import the requestContext plugin and call
fastify.register(requestContext) (or equivalent) prior to
fastify.register(graphqlPlugin, ...) so requestContext.get('reqInfo') is
populated for GraphQL requests.
```typescript
export const useSentry: () => Plugin = () => ({
  onExecute: ({ setExecuteFn, executeFn }) => {
    setExecuteFn((options) => {
      const { opName, opType, transactionName, document } =
        getOperationInfo(options);

      Sentry.getCurrentScope()
        .setTransactionName(transactionName)
        .setContext('graphql', {
          operation_name: opName,
          operation_type: opType,
          source: document,
        });
      return Sentry.startSpan(
        {
          op: 'graphql.execute',
          name: transactionName,
          forceTransaction: true,
          attributes: {
            [AttributeNames.OPERATION_NAME]: options.operationName as string,
            [AttributeNames.SOURCE]: document,
          },
        },
        () => executeFn(options) as Promise<void>,
      );
    });
  },
});
```
🛠️ Refactor suggestion
Don’t upcast execute result to Promise; preserve ExecutionResult and avoid scope chaining
- The callback passed to Sentry.startSpan is asserted as Promise, which erases the actual GraphQL ExecutionResult type and can cause type mismatches.
- Prefer using the resolved opName attribute rather than possibly-undefined options.operationName.
- Avoid chaining setTransactionName().setContext(...) unless the SDK guarantees fluent returns on these methods.
Apply this diff:
```diff
 export const useSentry: () => Plugin = () => ({
   onExecute: ({ setExecuteFn, executeFn }) => {
     setExecuteFn((options) => {
       const { opName, opType, transactionName, document } =
         getOperationInfo(options);
-      Sentry.getCurrentScope()
-        .setTransactionName(transactionName)
-        .setContext('graphql', {
-          operation_name: opName,
-          operation_type: opType,
-          source: document,
-        });
+      const scope = Sentry.getCurrentScope();
+      scope.setTransactionName(transactionName);
+      scope.setContext('graphql', {
+        operation_name: opName,
+        operation_type: opType,
+        source: document,
+      });
       return Sentry.startSpan(
         {
           op: 'graphql.execute',
           name: transactionName,
           forceTransaction: true,
           attributes: {
-            [AttributeNames.OPERATION_NAME]: options.operationName as string,
+            [AttributeNames.OPERATION_NAME]: opName,
             [AttributeNames.SOURCE]: document,
           },
         },
-        () => executeFn(options) as Promise<void>,
+        () => executeFn(options),
       );
     });
   },
 });
```

📝 Committable suggestion
```typescript
export const useSentry: () => Plugin = () => ({
  onExecute: ({ setExecuteFn, executeFn }) => {
    setExecuteFn((options) => {
      const { opName, opType, transactionName, document } =
        getOperationInfo(options);

      const scope = Sentry.getCurrentScope();
      scope.setTransactionName(transactionName);
      scope.setContext('graphql', {
        operation_name: opName,
        operation_type: opType,
        source: document,
      });
      return Sentry.startSpan(
        {
          op: 'graphql.execute',
          name: transactionName,
          forceTransaction: true,
          attributes: {
            [AttributeNames.OPERATION_NAME]: opName,
            [AttributeNames.SOURCE]: document,
          },
        },
        () => executeFn(options),
      );
    });
  },
});
```
🤖 Prompt for AI Agents
In examples/blog-with-auth/packages/backend/src/plugins/graphql/use-sentry.ts
around lines 37 to 64, the code forcibly upcasts the execute result to
Promise<void>, uses options.operationName (which may be undefined) and chains
Sentry scope methods; change the Sentry usage to call setTransactionName(...)
and setContext(...) in separate statements (not chained), set the attribute
OPERATION_NAME to the resolved opName variable (fallback for undefined
options.operationName), and remove the Promise<void> cast so the callback passed
to Sentry.startSpan returns the actual ExecutionResult/Promise<ExecutionResult>
from executeFn(options) to preserve types and avoid erasing the GraphQL result
type.
```typescript
async function startWorkers(): Promise<void> {
  // Start workers for all registered queues
  const startPromises = QUEUE_REGISTRY.map(async (queue) => {
    try {
      await queue.work();
    } catch (error: unknown) {
      logError(error, { source: 'run-workers', queueName: queue.name });
    }
  });

  await Promise.all(startPromises);
}
```
🛠️ Refactor suggestion
Don’t mask worker start failures — surface them and fail fast.
Catching per-queue errors inside the mapper ensures Promise.all never rejects, so main() logs success even if all workers failed. Aggregate results and throw when any queue fails to start.
```diff
-async function startWorkers(): Promise<void> {
-  // Start workers for all registered queues
-  const startPromises = QUEUE_REGISTRY.map(async (queue) => {
-    try {
-      await queue.work();
-    } catch (error: unknown) {
-      logError(error, { source: 'run-workers', queueName: queue.name });
-    }
-  });
-
-  await Promise.all(startPromises);
-}
+async function startWorkers(): Promise<void> {
+  const results = await Promise.allSettled(
+    QUEUE_REGISTRY.map(async (queue) => {
+      try {
+        await queue.work();
+        return { queueName: queue.name, ok: true as const };
+      } catch (error: unknown) {
+        logError(error, { source: 'run-workers', queueName: queue.name });
+        return { queueName: queue.name, ok: false as const, error };
+      }
+    }),
+  );
+
+  const failedQueues = results
+    .filter(
+      (r): r is PromiseFulfilledResult<{ ok: false; queueName: string }> =>
+        r.status === 'fulfilled' && !r.value.ok,
+    )
+    .map((r) => r.value.queueName);
+
+  if (failedQueues.length > 0) {
+    throw new Error(
+      `Failed to start workers for queues: ${failedQueues.join(', ')}`,
+    );
+  }
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
async function startWorkers(): Promise<void> {
  const results = await Promise.allSettled(
    QUEUE_REGISTRY.map(async (queue) => {
      try {
        await queue.work();
        return { queueName: queue.name, ok: true as const };
      } catch (error: unknown) {
        logError(error, { source: 'run-workers', queueName: queue.name });
        return { queueName: queue.name, ok: false as const, error };
      }
    }),
  );

  const failedQueues = results
    .filter(
      (r): r is PromiseFulfilledResult<{ ok: false; queueName: string }> =>
        r.status === 'fulfilled' && !r.value.ok,
    )
    .map((r) => r.value.queueName);

  if (failedQueues.length > 0) {
    throw new Error(
      `Failed to start workers for queues: ${failedQueues.join(', ')}`,
    );
  }
}
```
🤖 Prompt for AI Agents
In examples/blog-with-auth/packages/backend/src/scripts/run-workers.ts around
lines 25 to 36, the per-queue try/catch inside the mapper masks start failures
so Promise.all never rejects and main() reports success even if workers failed;
change to let Promise.all propagate rejections or use Promise.allSettled and
detect failures: start each queue without swallowing errors, collect results
with Promise.allSettled(), inspect for any rejected entries, log their errors
(with queueName), and if any failed throw an aggregate Error (or rethrow the
first error) so the process fails fast.
```typescript
/* TPL_LOGGER_ACTIONS:START */
context.errorId = logErrorToSentry(error, context);
/* TPL_LOGGER_ACTIONS:END */
```
Harden the Sentry path: never throw from the logger.
External sinks can fail. Ensure errors in logErrorToSentry don’t break the error path.
```diff
-  /* TPL_LOGGER_ACTIONS:START */
-  context.errorId = logErrorToSentry(error, context);
-  /* TPL_LOGGER_ACTIONS:END */
+  /* TPL_LOGGER_ACTIONS:START */
+  try {
+    context.errorId = logErrorToSentry(error, context);
+  } catch (sentErr) {
+    logger.warn({ msg: 'Failed to log to Sentry', err: sentErr });
+  }
+  /* TPL_LOGGER_ACTIONS:END */
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
/* TPL_LOGGER_ACTIONS:START */
try {
  context.errorId = logErrorToSentry(error, context);
} catch (sentErr) {
  logger.warn({ msg: 'Failed to log to Sentry', err: sentErr });
}
/* TPL_LOGGER_ACTIONS:END */
```
🤖 Prompt for AI Agents
In examples/blog-with-auth/packages/backend/src/services/error-logger.ts around
lines 22 to 24, the call to logErrorToSentry(error, context) can throw and break
the error handling path; wrap the call in a try/catch so any exceptions from the
Sentry sink are swallowed (or captured to a safe fallback) and do not propagate,
assign a safe fallback to context.errorId when Sentry fails, and optionally log
the Sentry failure to console/logger without re-throwing.
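A minimal, self-contained sketch of the "safe fallback" behavior this prompt asks for. `ErrorContext`, `unreliableSink`, and `assignErrorIdSafely` are hypothetical stand-ins for the PR's context object and `logErrorToSentry`:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative stand-in for the PR's error context.
interface ErrorContext {
  errorId?: string;
}

// Simulates logErrorToSentry throwing when the external sink is down.
function unreliableSink(_error: unknown, _context: ErrorContext): string {
  throw new Error('sentry unreachable');
}

function assignErrorIdSafely(error: unknown, context: ErrorContext): void {
  try {
    context.errorId = unreliableSink(error, context);
  } catch (sinkError) {
    // The sink must never break the error path: record the failure
    // locally and fall back to a locally generated id.
    console.warn('Failed to log to Sentry', sinkError);
    context.errorId = randomUUID();
  }
}
```

With this shape, callers always receive an `errorId` they can show to users, whether or not the external sink was reachable.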
✅ Actions performed

Reviews paused.
Summary by CodeRabbit
New Features
Documentation
Tests
Chores