Architecture: Multi-Signer Extension + Docker Deployment
Project: teressa-copeland-homes
Milestone: v1.2 — Multi-Signer Support + Deployment Hardening
Researched: 2026-04-03
Confidence: HIGH — based on direct codebase inspection + official Docker/Next.js documentation
Summary
The existing system is a clean, well-factored single-signer flow. Every document has exactly one signing token, one recipient, and one atomic "mark used" operation. Multi-signer requires four categories of change:
- Schema: Tag fields to signers, expand signingTokens to identify whose token it is, replace the single-recipient model with a per-signer recipients structure.
- Completion detection: Replace the single `usedAt` → `status = 'Signed'` trigger with an "all tokens claimed" check after each signing submission.
- Final PDF assembly: The per-signer merge model (each signer embeds into the same prepared PDF, sequentially via advisory lock) accumulates signatures into `signedFilePath`. A completion pass fires after the last signer claims their token.
- Migration: Existing signed documents must remain intact — achieved by treating the absence of `signerEmail` on a field as "legacy single-signer" (the same coalescing pattern already used for the `type` field in `getFieldType`).
Docker deployment has a distinct failure mode from local dev: env vars that exist in .env.local are absent in the Docker container unless explicitly provided at runtime. The email-sending failure in production Docker is caused by CONTACT_SMTP_HOST, CONTACT_EMAIL_USER, and CONTACT_EMAIL_PASS never reaching the container. The fix is env_file injection at docker compose up time, not Docker Secrets (which mount as files, not env vars, and require app-side entrypoint shim code that adds no security benefit for a single-server deployment).
Part 1: Multi-Signer Schema Changes
1. SignatureFieldData JSONB — add optional signerEmail
Current shape (from src/lib/db/schema.ts):
```typescript
interface SignatureFieldData {
  id: string;
  page: number;
  x: number;
  y: number;
  width: number;
  height: number;
  type?: SignatureFieldType; // optional — v1.0 had no type
}
```
New shape:
```typescript
interface SignatureFieldData {
  id: string;
  page: number;
  x: number;
  y: number;
  width: number;
  height: number;
  type?: SignatureFieldType;
  signerEmail?: string; // NEW — optional; absent = legacy single-signer or agent-owned field
}
```
Backward compatibility: The signerEmail field is optional. Existing documents stored in signature_fields JSONB have no signerEmail. The signing page already filters fields via isClientVisibleField(). A new getSignerEmail(field, fallbackEmail) helper mirrors getFieldType() and returns field.signerEmail ?? fallbackEmail — where fallbackEmail is the document's legacy single-recipient email. This keeps existing signed documents working without a data backfill.
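A minimal sketch of that helper, assuming the `SignatureFieldData` shape above (the name `getSignerEmail` and the coalescing behavior come from this doc; the interface is restated so the snippet is self-contained):

```typescript
// Shape restated from the schema section above (type narrowed to string here).
interface SignatureFieldData {
  id: string;
  page: number;
  x: number;
  y: number;
  width: number;
  height: number;
  type?: string;
  signerEmail?: string; // absent on legacy and agent-owned fields
}

// Mirrors the getFieldType() coalescing pattern: an absent signerEmail
// resolves to the document's legacy single-recipient email.
function getSignerEmail(field: SignatureFieldData, fallbackEmail: string): string {
  return field.signerEmail ?? fallbackEmail;
}
```

Because the fallback is applied at read time, no backfill of existing JSONB rows is ever needed.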
No SQL migration needed for the JSONB column itself — it is already jsonb, schema-less at the DB level.
2. signingTokens table — add signerEmail column
Current:
```sql
CREATE TABLE signing_tokens (
  jti text PRIMARY KEY,
  document_id text NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
  created_at timestamp DEFAULT now() NOT NULL,
  expires_at timestamp NOT NULL,
  used_at timestamp
);
```
New column to add:
ALTER TABLE signing_tokens ADD COLUMN signer_email text;
Drizzle schema change:
```typescript
export const signingTokens = pgTable('signing_tokens', {
  jti: text('jti').primaryKey(),
  documentId: text('document_id')
    .notNull()
    .references(() => documents.id, { onDelete: 'cascade' }),
  signerEmail: text('signer_email'), // NEW — null for legacy tokens
  createdAt: timestamp('created_at').defaultNow().notNull(),
  expiresAt: timestamp('expires_at').notNull(),
  usedAt: timestamp('used_at'),
});
```
Why not store field IDs in the token? Field filtering should happen server-side by matching field.signerEmail === tokenRow.signerEmail. Storing field IDs in the token creates a second source of truth and complicates migration. The signing GET endpoint already fetches the document's signatureFields and filters them — adding a signerEmail comparison is a one-line change.
Backward compatibility: signer_email is nullable. Existing tokens have null. The signing endpoint uses tokenRow.signerEmail to filter fields; null falls back to isClientVisibleField() (current behavior).
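The filtering rule can be sketched as a pure function (field and token shapes are simplified here; `isClientVisibleField` is injected because its real implementation lives elsewhere in the codebase):

```typescript
interface SignedField {
  id: string;
  signerEmail?: string;
}

// A null tokenSignerEmail marks a legacy token: fall back to visibility-only
// filtering, which is exactly the current single-signer behavior.
function fieldsForToken(
  fields: SignedField[],
  tokenSignerEmail: string | null,
  isClientVisibleField: (f: SignedField) => boolean,
): SignedField[] {
  return fields.filter(
    (f) =>
      isClientVisibleField(f) &&
      (tokenSignerEmail === null || f.signerEmail === tokenSignerEmail),
  );
}
```
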
3. documents table — add signers JSONB column
Current problem: assignedClientId is a single-value text column. Signers in multi-signer are identified by email, not necessarily by a clients row (requirement: "signers may not be in clients table"). The current emailAddresses JSONB column holds email strings but lacks per-signer identity (name, signing status, token linkage).
Decision: add a signers JSONB column; leave assignedClientId in place for legacy
ALTER TABLE documents ADD COLUMN signers jsonb;
New TypeScript type:
```typescript
export interface DocumentSigner {
  email: string;
  name?: string;     // display name for email greeting, optional
  tokenJti?: string; // populated at send time — links token back to signer record
  signedAt?: string; // ISO timestamp — populated when their token is claimed
}
```
Drizzle:
```typescript
// In documents table:
signers: jsonb('signers').$type<DocumentSigner[]>(),
```
Why JSONB array instead of a new table? A document_signers join table would be cleanest long-term, but for a solo-agent app with document-level granularity and no need to query "all documents this email signed across the system", JSONB avoids an extra join on every document fetch. The tokenJti field on each signer record gives the bidirectional link without the join table.
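Working with the JSONB array is a read-modify-write on the deserialized value. As an illustration, stamping one signer's completion might look like this (a sketch; `markSignerSigned` is our name, `DocumentSigner` is as defined above):

```typescript
interface DocumentSigner {
  email: string;
  name?: string;
  tokenJti?: string;
  signedAt?: string;
}

// Returns a new array with the matching signer's signedAt stamped;
// the caller persists the whole array back to documents.signers.
function markSignerSigned(
  signers: DocumentSigner[],
  email: string,
  signedAt: string,
): DocumentSigner[] {
  return signers.map((s) => (s.email === email ? { ...s, signedAt } : s));
}
```
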
Why keep assignedClientId? It is still used by the send route to resolve the clients row for client.email and client.name. For multi-signer, the agent provides emails directly in the signers array. The two flows coexist:
- Legacy: `assignedClientId` is set, `signers` is null → single-signer behavior
- New: `signers` is set (non-null, non-empty) → multi-signer behavior
The send route checks if (doc.signers?.length) { /* multi-signer path */ } else { /* legacy path */ }.
emailAddresses column: Currently stores [client.email, ...ccAddresses]. In multi-signer this is superseded by signers[].email. The column can remain and be ignored for new documents, or populated with all signer emails for audit reading consistency.
4. auditEvents — new event types
Current enum values:
document_prepared | email_sent | link_opened | document_viewed | signature_submitted | pdf_hash_computed
New values to add:
| Event Type | When Fired | Metadata |
|---|---|---|
| `signer_email_sent` | Per-signer email sent (supplements `email_sent` for multi-signer) | `{ signerEmail, tokenJti }` |
| `signer_signed` | Per-signer token claimed | `{ signerEmail }` |
| `document_completed` | All signers have signed — triggers final notification | `{ signerCount, mergedFilePath }` |
Backward compatibility: Postgres enums cannot have values removed, only added. The existing email_sent and signature_submitted events stay in the enum and continue to be fired for legacy single-signer documents. New multi-signer documents fire the new, more specific events. Adding values to a Postgres enum requires raw SQL that Drizzle cannot auto-generate:
```sql
ALTER TYPE audit_event_type ADD VALUE 'signer_email_sent';
ALTER TYPE audit_event_type ADD VALUE 'signer_signed';
ALTER TYPE audit_event_type ADD VALUE 'document_completed';
```
Important: In Postgres < 12, ALTER TYPE ... ADD VALUE cannot run inside a transaction block at all; in Postgres 12+ it can, but the new value cannot be used until that transaction commits. Either way, write the migration with Drizzle's `--> statement-breakpoint` comment between each ALTER to prevent Drizzle from wrapping them in a single transaction.
5. documents table — completion tracking columns unchanged
Multi-signer "completion" is no longer a single event. The existing columns serve all needs:
| Column | Current use | Multi-signer use |
|---|---|---|
| `status` | Draft → Sent → Viewed → Signed | "Signed" now means ALL signers complete. Status transitions to Signed only after `document_completed` fires. |
| `signedAt` | Timestamp of the single signing | Timestamp of completion (last signer claimed) — same semantic, set later. |
| `signedFilePath` | Path to the merged signed PDF | Accumulator path — updated by each signer as they embed; final value = completed PDF. |
| `pdfHash` | SHA-256 of signed PDF | Same — hash of the final merged PDF. |
Per-signer completion is tracked in signers[].signedAt (the JSONB array). No new columns required.
Part 2: Multi-Signer Data Flow
Field Tagging (Agent UI)
```text
Agent places field on PDF canvas
        ↓
FieldPlacer shows signer email selector (from doc.signers[])
        ↓
SignatureFieldData.signerEmail = "buyer@example.com"
        ↓
PUT /api/documents/[id]/fields persists to signatureFields JSONB
```
Chicken-and-egg consideration: The agent must know the signer list before tagging fields. Resolution: the PreparePanel collects signer emails first (a new multi-signer entry UI replaces the single email textarea). These are saved to documents.signers via PUT /api/documents/[id]/signers. The FieldPlacer palette then offers a signer email selector when placing a client-visible field.
Unassigned client fields: If signerEmail is absent on a client-visible field in a multi-signer document, behavior must be defined. Recommended: block sending until all client-signature and initials fields have a signerEmail. The UI shows a warning. Text, checkbox, and date fields do not require a signer tag (they are embedded at prepare time and never shown to signers).
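The send-blocking check reduces to finding signature-type fields with no signer tag. A sketch (the literal type strings `'client-signature'` and `'initials'` are assumptions — substitute whatever values `SignatureFieldType` actually uses):

```typescript
interface TaggableField {
  id: string;
  type?: string;
  signerEmail?: string;
}

// Field types that must carry a signerEmail before a multi-signer send.
// NOTE: these literal values are assumed, not taken from the real enum.
const SIGNER_REQUIRED_TYPES = new Set(['client-signature', 'initials']);

function untaggedSignerFields(fields: TaggableField[]): TaggableField[] {
  return fields.filter(
    // Legacy fields with no type coalesce to a signature type, per getFieldType().
    (f) => SIGNER_REQUIRED_TYPES.has(f.type ?? 'client-signature') && !f.signerEmail,
  );
}

// PreparePanel would block send while untaggedSignerFields(fields).length > 0.
```
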
Token Creation (Send)
```text
Agent clicks "Prepare and Send"
        ↓
POST /api/documents/[id]/prepare
  - embeds agent signatures, text fills → preparedFilePath
  - reads doc.signers[] to confirm signer list exists
        ↓
POST /api/documents/[id]/send
  - if doc.signers?.length: multi-signer path
      Promise.all(doc.signers.map(signer => {
        createSigningToken(documentId, signer.email)
          → INSERT signing_tokens (jti, document_id, signer_email, expires_at)
        sendSigningRequestEmail({ to: signer.email, signingUrl: /sign/[token] })
        logAuditEvent('signer_email_sent', { signerEmail: signer.email, tokenJti: jti })
      }))
      update doc.signers[*].tokenJti
      set documents.status = 'Sent'
  - else: legacy single-signer path (unchanged)
```
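The multi-signer branch of the flow above can be sketched with its side effects injected (`createToken` and `sendEmail` are stand-ins for `createSigningToken` and `sendSigningRequestEmail`; the real route also logs `signer_email_sent` per signer):

```typescript
interface DocumentSigner {
  email: string;
  name?: string;
  tokenJti?: string;
  signedAt?: string;
}

// One token + one email per signer, issued in parallel as in the flow above.
// Returns the signers array with tokenJti populated, ready to persist.
async function sendToSigners(
  signers: DocumentSigner[],
  createToken: (email: string) => Promise<string>, // stand-in for createSigningToken
  sendEmail: (to: string, jti: string) => Promise<void>, // stand-in for the mailer
): Promise<DocumentSigner[]> {
  return Promise.all(
    signers.map(async (signer) => {
      const jti = await createToken(signer.email);
      await sendEmail(signer.email, jti);
      return { ...signer, tokenJti: jti };
    }),
  );
}
```

Injecting the effects keeps the branching logic testable without a database or SMTP server.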
Signing Page (Per Signer)
```text
Signer opens /sign/[token]
        ↓
GET /api/sign/[token]
  - verifySigningToken(token) → { documentId, jti }
  - fetch tokenRow (jti) → tokenRow.signerEmail
  - fetch doc.signatureFields
  - if tokenRow.signerEmail:
      filter fields where field.signerEmail === tokenRow.signerEmail AND isClientVisibleField
    else:
      filter fields with isClientVisibleField (legacy path — unchanged)
  - return { status: 'pending', document: { ...doc, signatureFields: filteredFields } }
        ↓
Signer sees only their fields; draws signatures; submits
        ↓
POST /api/sign/[token]
  1.  Verify JWT
  2.  Atomic claim: UPDATE signing_tokens SET used_at = NOW() WHERE jti = ? AND used_at IS NULL
        → 0 rows = 409 already-signed
  3.  Acquire Postgres advisory lock on document ID (prevents concurrent PDF writes)
  4.  Read current accumulatorPath = doc.signedFilePath ?? doc.preparedFilePath
  5.  Embed this signer's signatures into accumulatorPath → write to new path
        (clients/{id}/{uuid}_partial.pdf, updated with atomic rename)
  6.  Update doc.signedFilePath = new path
  7.  Update doc.signers[signerEmail].signedAt = now
  8.  Release advisory lock
  9.  Check completion: COUNT(signing_tokens WHERE document_id = ? AND used_at IS NOT NULL)
        vs COUNT(signing_tokens WHERE document_id = ?)
  10a. Not all signed: logAuditEvent('signer_signed'); return 200
  10b. All signed (completion):
        - Compute pdfHash of final signedFilePath
        - UPDATE documents SET status='Signed', signedAt=now, pdfHash=hash
        - logAuditEvent('signer_signed')
        - logAuditEvent('document_completed', { signerCount, mergedFilePath })
        - sendAgentNotificationEmail (all signed)
        - sendAllSignersCompletionEmail (each signer receives final PDF link)
        - return 200
```
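The completion check in step 9 reduces to a count comparison; expressed as a pure predicate over the fetched token rows (row shape assumed from the `signing_tokens` schema above):

```typescript
interface TokenRow {
  jti: string;
  usedAt: Date | null;
}

// All tokens claimed = document complete. An empty token list is never
// complete — a document with no tokens has not been sent.
function allSignersComplete(tokens: TokenRow[]): boolean {
  return tokens.length > 0 && tokens.every((t) => t.usedAt !== null);
}
```

Running this check inside the same transaction that claimed the token (while the advisory lock is held) prevents two last signers from both concluding "not all signed".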
Advisory Lock Implementation
```typescript
// Within the signing POST, wrap the PDF write in an advisory lock:
await db.execute(sql`SELECT pg_advisory_xact_lock(hashtext(${documentId}))`);
// All subsequent DB operations in this transaction hold the lock.
// Lock released automatically when transaction commits or rolls back.
```
Drizzle's `db.execute(sql...)` supports raw SQL. `pg_advisory_xact_lock` acquires a transaction-scoped advisory lock: it is released automatically when the transaction commits or rolls back, so it cannot leak if the request errors mid-write — safe for this use case.
Part 3: Migration Strategy
Existing signed documents — no action required
The signerEmail field is absent from all existing signatureFields JSONB. For existing tokens, signer_email = null. The signing endpoint's null path falls through to isClientVisibleField() — identical to current behavior. Existing documents never enter multi-signer code paths.
Migration file (single file, order matters)
Write as drizzle/0010_multi_signer.sql:
```sql
-- 1. Expand signing_tokens
ALTER TABLE "signing_tokens" ADD COLUMN "signer_email" text;
--> statement-breakpoint

-- 2. Add signers JSONB to documents
ALTER TABLE "documents" ADD COLUMN "signers" jsonb;
--> statement-breakpoint

-- 3. Expand audit event enum
-- Cannot run inside a transaction in Postgres < 12 — breakpoints keep each ALTER standalone
ALTER TYPE "audit_event_type" ADD VALUE 'signer_email_sent';
--> statement-breakpoint
ALTER TYPE "audit_event_type" ADD VALUE 'signer_signed';
--> statement-breakpoint
ALTER TYPE "audit_event_type" ADD VALUE 'document_completed';
```
No backfill required. Existing rows have null for new columns, which is the correct legacy sentinel value at every call site.
TypeScript changes after migration
- Add `signerEmail?: string` to the `SignatureFieldData` interface
- Add the `DocumentSigner` interface
- Add the `signers` column to the `documents` Drizzle table definition
- Add `signerEmail` to the `signingTokens` Drizzle table definition
- Add three values to the `auditEventTypeEnum` array in the schema
- Add the `getSignerEmail(field, fallback)` helper function
All changes are additive. No existing function signatures break.
Part 4: Multi-Signer Build Order
Each step is independently deployable. Deploy schema migration first, then backend changes, then UI.
Step 1: DB migration (0010_multi_signer.sql)
→ System: DB ready. App unchanged. No user impact.
Step 2: Schema TypeScript + token layer
- Add DocumentSigner type, signerEmail to SignatureFieldData
- Update signingTokens and documents Drizzle definitions
- Update createSigningToken(documentId, signerEmail?)
- Add auditEventTypeEnum new values
→ System: Token creation accepts signer email. All existing behavior unchanged.
Step 3: Signing GET endpoint — field filtering
- Read tokenRow.signerEmail
- Filter signatureFields by signerEmail (null → legacy)
→ System: Signing page shows correct fields per signer. Legacy tokens unaffected.
Step 4: Signing POST endpoint — accumulator + completion
- Add advisory lock
- Add accumulator path logic
- Add completion check
- Add document_completed event + notifications
→ System: Multi-signer signing flow complete end-to-end. Single-signer legacy unchanged.
Step 5: Send route — per-signer token loop
- Detect doc.signers vs legacy
- Loop: create token + send email per signer
- Log signer_email_sent per signer
→ System: New documents get per-signer tokens. Old documents still use legacy path.
Step 6: New endpoint — PUT /api/documents/[id]/signers
- Validate email array
- Update documents.signers JSONB
→ System: Agent can set signer list from UI.
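Validation for that PUT body might look like this (a sketch: the request shape — an array of `{ email, name? }` objects — is assumed from the `DocumentSigner` type, and the regex is a deliberately loose format check, not full RFC 5322 parsing):

```typescript
// Loose format check — rejects obvious non-emails, accepts everything else.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Returns the normalized signer list, or null if the body is invalid
// (not an array, empty, bad email, or duplicate signer).
function validateSigners(body: unknown): { email: string; name?: string }[] | null {
  if (!Array.isArray(body) || body.length === 0) return null;
  const seen = new Set<string>();
  const out: { email: string; name?: string }[] = [];
  for (const entry of body) {
    if (typeof entry !== 'object' || entry === null) return null;
    const { email, name } = entry as { email?: unknown; name?: unknown };
    if (typeof email !== 'string' || !EMAIL_RE.test(email)) return null;
    const normalized = email.toLowerCase();
    if (seen.has(normalized)) return null; // duplicate signer
    seen.add(normalized);
    out.push({ email: normalized, ...(typeof name === 'string' ? { name } : {}) });
  }
  return out;
}
```

Lowercasing at write time keeps the later `field.signerEmail === tokenRow.signerEmail` comparison from failing on case differences.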
Step 7: UI — PreparePanel signer list
- Replace single email textarea with name+email rows (add/remove)
- Call PUT /api/documents/[id]/signers on change
- Warn if client-visible fields lack signerEmail
→ System: Agent can define signers before placing fields.
Step 8: UI — FieldPlacer signer tagging
- Add signerEmail selector per client-visible field
- Color-code placed fields by signer
- Pass signerEmail through persistFields
→ System: Full multi-signer field placement.
Step 9: Email — completion notifications
- sendAllSignersCompletionEmail in signing-mailer.tsx
- Update sendAgentNotificationEmail for completion context
→ System: All parties notified and receive final PDF link.
Step 10: End-to-end verification
- Test with two signers on a real Utah form
- Verify field isolation, sequential PDF accumulation, final hash
Part 5: Multi-Signer Components — New vs Modified
Modified
| Component | File | Nature of Change |
|---|---|---|
| `SignatureFieldData` interface | `src/lib/db/schema.ts` | Add `signerEmail?: string` |
| `auditEventTypeEnum` | `src/lib/db/schema.ts` | Add 3 new values |
| `signingTokens` Drizzle table | `src/lib/db/schema.ts` | Add `signerEmail` column |
| `documents` Drizzle table | `src/lib/db/schema.ts` | Add `signers` column |
| `createSigningToken()` | `src/lib/signing/token.ts` | Add `signerEmail?` param; INSERT includes it |
| `GET /api/sign/[token]` | `src/app/api/sign/[token]/route.ts` | Signer-aware field filtering (null path = legacy) |
| `POST /api/sign/[token]` | `src/app/api/sign/[token]/route.ts` | Accumulator PDF logic, advisory lock, completion check, completion notifications |
| `POST /api/documents/[id]/send` | `src/app/api/documents/[id]/send/route.ts` | Per-signer token + email loop; legacy path preserved |
| `PreparePanel` | `src/app/portal/(protected)/documents/[docId]/_components/PreparePanel.tsx` | Multi-signer list entry UI (replaces single textarea) |
| `FieldPlacer` | `src/app/portal/(protected)/documents/[docId]/_components/FieldPlacer.tsx` | Signer email selector on field place; per-signer color coding |
| `signing-mailer.tsx` | `src/lib/signing/signing-mailer.tsx` | Add `sendAllSignersCompletionEmail` function |
New
| Component | File | Purpose |
|---|---|---|
| `DocumentSigner` interface | `src/lib/db/schema.ts` | Shape of `documents.signers[]` JSONB entries |
| `getSignerEmail()` helper | `src/lib/db/schema.ts` | Returns `field.signerEmail ?? fallback`; mirrors `getFieldType()` pattern |
| `PUT /api/documents/[id]/signers` | `src/app/api/documents/[id]/signers/route.ts` | Save/update signer list on the document |
| Migration file | `drizzle/0010_multi_signer.sql` | All DB schema changes in one file |
Not Changed
| Component | Reason |
|---|---|
| `embedSignatureInPdf()` | Works on any path; accumulator pattern reuses it as-is |
| `verifySigningToken()` | JWT payload unchanged; `signerEmail` is DB-only, not a JWT claim |
| `logAuditEvent()` | Accepts any enum value; new values are additive |
| `isClientVisibleField()` | Logic unchanged; still used for legacy null-signer tokens |
| `GET /api/sign/[token]/pdf` | Serves prepared PDF; no signer-specific logic needed |
| `clients` table | Signers are email-identified, not FK-linked |
| `preparePdf()` / prepare endpoint | Unchanged; accumulation happens during signing, not preparation |
Part 6: Docker Compose — Secrets and Environment Variables
The Core Problem
The existing email failure in Docker production is a runtime env var injection gap: CONTACT_SMTP_HOST, CONTACT_EMAIL_USER, CONTACT_EMAIL_PASS (and CONTACT_SMTP_PORT) exist in .env.local during development but are never passed to the Docker container at runtime. The nodemailer transporter in src/lib/signing/signing-mailer.tsx reads these directly from process.env. When they are undefined, nodemailer.createTransport() silently creates a transporter with no credentials, and sendMail() fails at send time.
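A cheap guard that would have surfaced this at boot instead of at send time is a fail-fast env check before creating the transporter (a sketch: `requireEnv` is our name; the variable names are from this doc):

```typescript
// Throws at startup if any listed variable is missing, instead of letting
// nodemailer build a credential-less transporter that fails at send time.
function requireEnv(
  env: Record<string, string | undefined>,
  keys: string[],
): Record<string, string> {
  const missing = keys.filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return Object.fromEntries(keys.map((k) => [k, env[k] as string] as const));
}

// Usage in the mailer module (illustrative):
// const smtp = requireEnv(process.env, [
//   'CONTACT_SMTP_HOST', 'CONTACT_SMTP_PORT',
//   'CONTACT_EMAIL_USER', 'CONTACT_EMAIL_PASS',
// ]);
```
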
Why Not Docker Secrets (file-based)?
Docker Compose's secrets: block mounts secret values as files at /run/secrets/<name> inside the container. This is designed for Docker Swarm and serves a specific security model (encrypted transport in Swarm, no env var exposure in docker inspect). It requires the application to read from the filesystem instead of process.env. Bridging this to process.env requires an entrypoint shell script that reads each /run/secrets/ file and exports its value before starting node server.js.
For this deployment (single VPS, secrets are SSH-managed files on the server, not Swarm), the file-based secrets approach adds complexity with no meaningful security benefit over a properly permissioned .env.production file. Use env_file injection, not secrets:.
Decision: Confirmed approach is env_file: with a server-side .env.production file (not committed to git, permissions 600, owned by deploy user).
Environment Variable Classification
Not all env vars are equal. Next.js has two distinct categories:
| Category | Prefix | When Evaluated | Who Can Read | Example |
|---|---|---|---|---|
| Server-only runtime | (none) | At request time, via `process.env` | API routes, Server Components, Route Handlers | `DATABASE_URL`, `CONTACT_SMTP_HOST`, `OPENAI_API_KEY` |
| Public build-time | `NEXT_PUBLIC_` | At `next build` — inlined into JS bundle | Client-side code | (none in this app) |
This app has no NEXT_PUBLIC_ variables. All secrets are server-only and evaluated at request time. They do not need to be present at docker build time — only at docker run / docker compose up time. This is the ideal case: the same Docker image can run in any environment by providing different env_file values.
Verified: Next.js App Router server-side code (process.env.X in API routes, Server Components) reads env vars at request time when the route is dynamically rendered. Source: Next.js deploying docs, vercel/next.js docker-compose example.
Required Secrets for Production
Derived from .env.local inspection — all server-only, none NEXT_PUBLIC_:
DATABASE_URL — Neon PostgreSQL connection string
SIGNING_JWT_SECRET — JWT signing key for signing tokens
AUTH_SECRET — Next Auth / Iron Session secret
AGENT_EMAIL — Agent login email
AGENT_PASSWORD — Agent login password hash seed
BLOB_READ_WRITE_TOKEN — Vercel Blob storage token
CONTACT_EMAIL_USER — SMTP username (fixes email delivery bug)
CONTACT_EMAIL_PASS — SMTP password (fixes email delivery bug)
CONTACT_SMTP_HOST — SMTP host (fixes email delivery bug)
CONTACT_SMTP_PORT — SMTP port (fixes email delivery bug)
OPENAI_API_KEY — GPT-4.1 for AI field placement
SKYSLOPE_* and URE_* credentials are script-only (seed/scrape scripts), not needed in the production container.
Compose File Structure
docker-compose.yml (production):
```yaml
services:
  app:
    build:
      context: ./teressa-copeland-homes
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "3000:3000"
    env_file:
      - .env.production   # server-side secrets — NOT committed to git
    environment:
      NODE_ENV: production
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider",
             "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
```
.env.production (on server, never committed):
DATABASE_URL=postgres://...
SIGNING_JWT_SECRET=...
AUTH_SECRET=...
AGENT_EMAIL=teressa@...
AGENT_PASSWORD=...
BLOB_READ_WRITE_TOKEN=...
CONTACT_EMAIL_USER=...
CONTACT_EMAIL_PASS=...
CONTACT_SMTP_HOST=smtp.fastmail.com
CONTACT_SMTP_PORT=465
OPENAI_API_KEY=sk-...
.gitignore must include:
.env.production
.env.production.local
.env.local
Dockerfile — next.config.ts Change Required
The current next.config.ts does not set output: 'standalone'. The standalone output is required for the official multi-stage Docker pattern — it produces a self-contained server.js with only necessary files, yielding a ~60-80% smaller production image compared to copying all of node_modules.
Change needed in next.config.ts:
```typescript
const nextConfig: NextConfig = {
  output: 'standalone', // ADD THIS
  transpilePackages: ['react-pdf', 'pdfjs-dist'],
  serverExternalPackages: ['@napi-rs/canvas'],
};
```
Caution: @napi-rs/canvas is a native addon. Verify the production base image (Debian slim recommended, not Alpine) has the required glibc version. Alpine uses musl libc which is incompatible with pre-built @napi-rs/canvas binaries. The official canary Dockerfile uses node:24-slim (Debian).
Dockerfile — Recommended Pattern
Three-stage build based on the official Next.js canary example:
```dockerfile
ARG NODE_VERSION=20-slim

# Stage 1: Install dependencies
FROM node:${NODE_VERSION} AS dependencies
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci --no-audit --no-fund

# Stage 2: Build
FROM node:${NODE_VERSION} AS builder
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
ENV NODE_ENV=production
# No NEXT_PUBLIC_ vars needed — all secrets are server-only runtime vars
RUN npm run build

# Stage 3: Production runner
FROM node:${NODE_VERSION} AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
RUN mkdir .next && chown node:node .next
COPY --from=builder --chown=node:node /app/public ./public
COPY --from=builder --chown=node:node /app/.next/standalone ./
COPY --from=builder --chown=node:node /app/.next/static ./.next/static
USER node
EXPOSE 3000
CMD ["node", "server.js"]
```
No ARG/ENV lines for secrets in the Dockerfile. Secrets are never baked into the image. They arrive exclusively at runtime via env_file: in the Compose file.
Health Check Endpoint
Required for the Compose healthcheck: to work. Create at src/app/api/health/route.ts:
```typescript
export async function GET() {
  return Response.json({ status: 'ok', uptime: process.uptime() });
}
```
Caveat: curl is not installed in slim images, and wget may also be absent from node:20-slim — the official node Dockerfiles pull in their download tools as build dependencies and purge them afterwards. If the wget healthcheck fails with "executable file not found", swap the test for the node binary itself, which is always present: `test: ["CMD", "node", "-e", "fetch('http://localhost:3000/api/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]`. Verify whichever variant you pick with `docker compose ps` (the container should report healthy) before relying on it.
.dockerignore
Prevents secrets and large dirs from entering the build context:
node_modules
.next
.env*
uploads/
seeds/
scripts/
drizzle/
*.png
*.pdf
Deployment Procedure (First Time)
1. SSH into VPS
2. git clone (or git pull) the repo
3. Create .env.production with all secrets (chmod 600 .env.production)
4. Run database migration: docker compose run --rm app npm run db:migrate
(or run migration against Neon directly before starting)
5. docker compose build
6. docker compose up -d
7. docker compose logs -f app (verify email sends on first signing test)
Common Pitfall: db:migrate in Container
Drizzle db:migrate reads DATABASE_URL from env. In the container, this is provided via env_file:. Run migration as a one-off:
```sh
docker compose run --rm app node -e "
const { drizzle } = require('drizzle-orm/neon-http');
const { migrate } = require('drizzle-orm/neon-http/migrator');
// ...
"
```
Or more practically: run npx drizzle-kit migrate from the host with DATABASE_URL set in the shell, pointing at the production Neon database, before deploying the new container. This avoids needing drizzle-kit inside the production image.
Part 7: Multi-Signer Key Risks and Mitigations
| Risk | Severity | Mitigation |
|---|---|---|
| Two signers submit simultaneously, both read same PDF | HIGH | Postgres advisory lock `pg_advisory_xact_lock(hashtext(documentId))` on signing POST |
| Accumulator path tracking lost between signers | MEDIUM | `documents.signedFilePath` always tracks current accumulator; null = use `preparedFilePath` |
| Agent sends before all fields tagged to signers | MEDIUM | PreparePanel validates: block send if any client-visible field has no `signerEmail` in a multi-signer document |
| `ALTER TYPE ADD VALUE` in Postgres < 12 fails in transaction | MEDIUM | Use `--> statement-breakpoint` between each ALTER; verify Postgres version |
| Resending to a signer (token expired) | LOW | Issue new token via a resend endpoint; existing tokens remain valid |
| Legacy documents break | LOW | `signerEmail` optional at every layer; null path = unchanged behavior throughout |
Part 8: Docker Key Risks and Mitigations
| Risk | Severity | Mitigation |
|---|---|---|
| `.env.production` committed to git | HIGH | `.gitignore` entry required; never add to repo |
| `@napi-rs/canvas` binary incompatible with Alpine | HIGH | Use `node:20-slim` (Debian), not `node:20-alpine` |
| Secrets baked into Docker image layer | MEDIUM | Zero ARG/ENV secret lines in Dockerfile; all secrets via `env_file:` at compose up |
| `standalone` output omits required files | MEDIUM | Test locally with `output: 'standalone'` before pushing; watch for missing static assets |
| Health check uses curl (not in slim image) | LOW | Use `wget` in healthcheck command, with a node-based fallback if wget is also absent |
| Migration runs against wrong DB | LOW | Run `drizzle-kit migrate` from host against Neon URL before container start; never inside production image |
Sources
- Docker Compose Secrets — Official Docs — HIGH confidence
- Next.js with-docker official example — Dockerfile (canary) — HIGH confidence
- Next.js with-docker-compose official example (canary) — HIGH confidence
- Next.js env var classification (runtime vs build-time) — HIGH confidence
- Direct codebase inspection: `src/lib/signing/signing-mailer.tsx`, `src/lib/db/schema.ts`, `.env.local` key names, `next.config.ts` — HIGH confidence