
TalentG Application Technical Analysis - Senior Developer Questions

This document provides comprehensive answers to all technical questions about the TalentG application architecture, focusing on the Strength Finder feature and overall system design.

A. General Web Application Architecture

A1. Which exact Next.js version are we using? (e.g., 14 with App Router or 13 with Pages Router?)

Answer: We are using Next.js 15.2.4 with the App Router (not Pages Router). This is the latest stable version that includes:
  • React 19.0.0 support
  • Enhanced App Router with Server Components
  • Improved performance optimizations
  • Modern routing architecture

A2. Are we hosting the app on Vercel, Firebase Hosting, or a custom VPS / Cloud Run?

Answer: The application is hosted on Vercel with automatic deployments triggered by pushes to the main branch. Key indicators:
  • Production URL: talentg.vercel.app
  • Configuration includes Vercel-specific cache headers and deployment patterns
  • Environment variables are configured for Vercel deployment
  • Auto-deployment from main branch to production

A3. Do we use TypeScript or plain JavaScript across the app?

Answer: We use TypeScript 5.8.2 exclusively across the entire application. All files use .tsx and .ts extensions with:
  • Strict type checking enabled
  • Full TypeScript configuration in tsconfig.json
  • Zod schemas for runtime validation
  • TypeScript interfaces for all components and API responses

A4. How is state management handled — Zustand, Redux, or Context API?

Answer: We use Jotai for state management, not Zustand, Redux, or Context API. Jotai is configured as:
  • Atomic state management library
  • Lightweight and performant
  • Used for complex state like the Strength Finder assessment progress
  • Stores are located in src/store/ directory
  • Atoms are used for reactive state management
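
For illustration, a minimal sketch of how the assessment answers could be modeled with Jotai atoms; the atom and field names below are assumptions, not the exact contents of src/store/:

```typescript
// Illustrative Jotai store for Strength Finder answers (names are assumed)
import { atom } from "jotai";

// One answer: the question id plus the user's selected value(s)
export interface StrengthAnswer {
  questionId: string;                 // e.g. "leadership_1"
  value: number | string | string[];  // Likert score, single choice, or multi-choice
}

// Map of questionId -> answer, held in memory for the current session only
export const answersAtom = atom<Record<string, StrengthAnswer>>({});

// Derived atom: how many of the 25 questions have been answered so far
export const answeredCountAtom = atom((get) => Object.keys(get(answersAtom)).length);
```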

A5. How is authentication currently integrated with Supabase? (email OTP, magic link, Google OAuth, etc.)

Answer: Authentication uses Supabase Auth with multiple methods:
  • Email/Password authentication
  • Google OAuth (primary OAuth provider)
  • Email OTP and magic links are also supported as secondary sign-in options
  • Custom middleware handles authentication routing and session management
  • Row Level Security (RLS) is enabled on all user data tables
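
A minimal sketch of the Google OAuth entry point with supabase-js, assuming a standard client setup; the callback path is a placeholder:

```typescript
// Illustrative Google OAuth sign-in (actual client setup lives elsewhere in the app)
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

async function signInWithGoogle() {
  const { data, error } = await supabase.auth.signInWithOAuth({
    provider: "google",
    options: { redirectTo: `${window.location.origin}/auth/callback` }, // assumed callback route
  });
  if (error) throw error;
  return data;
}
```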

A6. Is server-side rendering (SSR) enabled for authenticated pages, or do we rely on client-side rendering (CSR)?

Answer: We use a hybrid approach:
  • Server Components for initial page loads (SSR)
  • Client Components for interactive features (CSR)
  • Authenticated pages use Server Components for better performance
  • Client-side routing and state management for dynamic interactions
  • Middleware handles authentication checks server-side

A7. Do we have API routes inside Next.js (e.g., /app/api/ai/route.ts) or use external serverless functions for AI calls?

Answer: We have internal API routes in Next.js (/app/api/ directory) for AI calls:
  • /api/generate-strength-analysis/route.ts - Main AI analysis endpoint
  • API routes handle authentication, validation, and AI processing
  • No external serverless functions - everything is contained within Next.js
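
A rough sketch of the route handler shape; the payload fields and helper are assumptions about the actual contract, not the real implementation:

```typescript
// app/api/generate-strength-analysis/route.ts - simplified sketch only
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // 1. Authenticate the caller (the real route uses the Supabase session)
  // 2. Validate the payload (the real route uses Zod schemas)
  const body = await req.json();
  if (!Array.isArray(body?.answers)) {
    return NextResponse.json({ error: "Invalid payload" }, { status: 400 });
  }

  // 3. Call the AI provider and return the generated summary
  const analysis = await generateAnalysis(body.answers); // hypothetical helper
  return NextResponse.json({ analysis });
}

// Placeholder so the sketch is self-contained
async function generateAnalysis(answers: unknown[]): Promise<string> {
  return `Analysis based on ${answers.length} answers`;
}
```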

A8. What environment variables and secrets are currently loaded (e.g., Supabase keys, Gemini API key, etc.)?

Answer: Key environment variables include:
  • Supabase: NEXT_PUBLIC_SUPABASE_URL, NEXT_PUBLIC_SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY
  • AI Services: OPENROUTER_API_KEY (for AI analysis)
  • Email: SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASSWORD, SMTP_FROM_EMAIL
  • Deployment: Vercel-specific variables
  • Maps: NEXT_PUBLIC_GOOGLE_MAP_API_KEY
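
Since Zod is already in the stack, one option is validating these variables at startup; this layer is a suggestion, not something confirmed to exist in the codebase:

```typescript
// env.ts - illustrative startup check for required secrets
import { z } from "zod";

const envSchema = z.object({
  NEXT_PUBLIC_SUPABASE_URL: z.string().url(),
  NEXT_PUBLIC_SUPABASE_ANON_KEY: z.string().min(1),
  SUPABASE_SERVICE_ROLE_KEY: z.string().min(1),
  OPENROUTER_API_KEY: z.string().min(1),
  SMTP_HOST: z.string().min(1),
  SMTP_PORT: z.coerce.number(),
});

// Throws at boot if any required variable is missing or malformed
export const env = envSchema.parse(process.env);
```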

A9. How are we storing and managing PDF reports now — Firebase Storage, Supabase Storage, or elsewhere?

Answer: Supabase Storage is used for PDF management:
  • Storage buckets: offer-letters, intern-assignments, profile-avatars, assignment-resources
  • PDFs are generated via N8N webhooks (external service)
  • Files are stored with organized paths (e.g., {batch}/{userName}-{userId}.pdf)
  • Supabase Storage handles file uploads, downloads, and access control
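
A minimal sketch of an upload following the path convention above; the helper itself is illustrative:

```typescript
// Illustrative server-side upload to the offer-letters bucket
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // server-side only, never exposed to the client
);

async function uploadOfferLetter(batch: string, userName: string, userId: string, pdf: Blob) {
  const path = `${batch}/${userName}-${userId}.pdf`;
  const { data, error } = await supabase.storage
    .from("offer-letters")
    .upload(path, pdf, { contentType: "application/pdf", upsert: true });
  if (error) throw error;
  return data.path;
}
```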

A10. What is the average API response time currently for Gemini when generating one report?

Answer: Based on the implementation:
  • Uses OpenRouter API with Gemma 3 4B model
  • Maximum tokens: 550 (approximately 400-word reports)
  • Response time is typically 5-15 seconds for complete analysis
  • No caching implemented - each request generates fresh analysis
  • Streaming is supported but not currently used in UI

B. Strength Finder Feature — Frontend

B1. How is the Strength Finder form built? (Custom React form, Formidable Form integration, or Supabase table-driven UI?)

Answer: Custom React form with sophisticated state management:
  • Built using Jotai atoms for state management
  • React Hook Form is available but not used for this feature
  • Zod validation for form inputs
  • Custom question components (QuestionCard.tsx)
  • No Supabase table-driven UI - questions are stored in a static TypeScript data file (strength-finder-questions.ts)

B2. Are question sets stored statically (JSON file) or dynamically fetched from the database?

Answer: Statically stored in a TypeScript data file:
  • Questions stored in src/data/strength-finder-questions.ts
  • Not fetched from database - static configuration
  • 25 questions total (5 sections × 5 questions each)
  • Questions include Likert scale, single-choice, multi-choice, and open-ended types
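
A hedged sketch of how the static question data might be typed; the field names are assumptions based on the behavior described in B3-B4:

```typescript
// src/data/strength-finder-questions.ts - illustrative shape only
export type QuestionType = "likert" | "single-choice" | "multi-choice" | "open-ended";

export interface StrengthQuestion {
  id: string;               // unique id, e.g. "leadership_1"
  type: QuestionType;
  text: string;
  options?: string[];       // for choice questions
  maxSelections?: number;   // for multi-choice questions
  weight?: number;          // consumed by the scoring logic
}

export interface StrengthSection {
  id: string;               // e.g. "leadership"
  title: string;            // e.g. "Leadership"
  questions: StrengthQuestion[]; // 5 questions per section
}

export const strengthSections: StrengthSection[] = [
  {
    id: "leadership",
    title: "Leadership",
    questions: [
      { id: "leadership_1", type: "likert", text: "I enjoy taking charge of group tasks.", weight: 1 },
      // ...remaining leadership questions
    ],
  },
  // ...Communication, Reasoning, New Ideas, Discipline sections
];
```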

B3. Does each question have a unique question ID and section reference in the DB schema?

Answer: Yes, comprehensive ID system:
  • Each question has unique id (e.g., "leadership_1", "communication_2")
  • Section reference through parent StrengthSection object
  • Questions are organized in 5 sections: Leadership, Communication, Reasoning, New Ideas, Discipline
  • IDs are used for answer tracking and scoring

B4. How do we handle multi-select and Likert questions — with custom components or third-party form components?

Answer: Custom components for all question types:
  • QuestionCard.tsx handles all question types
  • Likert scale: 1-5 rating system with custom UI
  • Multi-choice: Custom checkbox implementation with maxSelections limit
  • Single-choice: Custom radio button implementation
  • Open-ended: Textarea with validation
  • All components use RizzUI design system

B5. Are we saving answers live (auto-save) or only when the user submits all 8 sections?

Answer: Client-side only, no auto-save:
  • Answers stored in Jotai atoms during session
  • No database persistence until final submission
  • All 25 questions must be answered before completion
  • Answers lost if user closes browser (no persistence)
  • Progress tracking through atoms but not saved to database

B6. Do we allow users to pause and resume the test?

Answer: Limited pause/resume capability:
  • No explicit pause/resume UI
  • Answers persist in memory during session
  • No database persistence - refreshing page loses progress
  • Users can navigate between questions but cannot save and resume later
  • Assessment must be completed in one session

B7. Is there any timer or progress bar implemented on the UI?

Answer: Progress tracking implemented:
  • ProgressStepper component shows section completion
  • Visual progress bar with 5 sections
  • No timer - unlimited time for completion
  • Question counter and section navigation
  • Completion percentage shown in progress bar

B8. How do we calculate section-wise scores (client-side or server-side)?

Answer: Client-side calculation:
  • Scoring logic in src/lib/strength-finder/scoring.ts
  • processAssessmentResults() function calculates scores
  • Weighted scoring based on question weights
  • Section scores calculated immediately on completion
  • No server-side calculation - all processing happens client-side
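
For illustration, a simplified weighted-scoring sketch in the spirit of processAssessmentResults(); the exact formula and normalisation are assumptions:

```typescript
// Simplified section scoring - the real logic in src/lib/strength-finder/scoring.ts may differ
interface ScoredAnswer {
  questionId: string;
  sectionId: string;
  value: number;   // e.g. Likert 1-5 or a mapped choice score
  weight: number;  // per-question weight
}

function scoreSections(answers: ScoredAnswer[]): Record<string, number> {
  const totals: Record<string, { weighted: number; maxPossible: number }> = {};

  for (const a of answers) {
    const t = (totals[a.sectionId] ??= { weighted: 0, maxPossible: 0 });
    t.weighted += a.value * a.weight;
    t.maxPossible += 5 * a.weight; // assumes a 5-point maximum per question
  }

  // Normalise each section to a 0-100 score
  return Object.fromEntries(
    Object.entries(totals).map(([section, t]) => [
      section,
      Math.round((t.weighted / t.maxPossible) * 100),
    ])
  );
}
```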

B9. Are we storing the raw answers, computed scores, or both in Supabase?

Answer: Both raw answers and computed scores:
  • Table: strength_finder_assessments
  • Stores: answers (raw), section_scores, overall_score, ai_summary
  • Raw answers as JSON array with question IDs and values
  • Computed scores as structured JSON with section breakdowns
  • AI analysis stored as text in ai_summary field
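
A hedged sketch of the stored row and the insert call; the column names follow the answer above, everything else is assumed:

```typescript
// Illustrative shape of one strength_finder_assessments row
import type { SupabaseClient } from "@supabase/supabase-js";

interface StrengthFinderAssessmentRow {
  user_id: string;
  answers: { questionId: string; value: number | string | string[] }[]; // raw answers (JSON)
  section_scores: Record<string, number>;                               // computed per-section scores (JSON)
  overall_score: number;
  ai_summary: string | null;  // ~400-word plain-text report
  completed_at: string;       // ISO timestamp
}

// Insert once the user has answered all 25 questions
async function saveAssessment(supabase: SupabaseClient, row: StrengthFinderAssessmentRow) {
  const { error } = await supabase.from("strength_finder_assessments").insert(row);
  if (error) throw error;
}
```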

B10. How do we trigger the AI report generation — immediately after final submit, or via a background job?

Answer: Immediate generation after submit:
  • AI analysis triggered in results page (useEffect)
  • Not a background job - the results page waits for the AI response
  • generate-strength-analysis API called immediately
  • No background job queue - synchronous processing
  • User waits 5-15 seconds for AI analysis to complete
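
A sketch of the immediate trigger from the results page; the component and payload names are illustrative, not the actual files:

```typescript
"use client";
// Illustrative results-page effect that calls the analysis API on mount
import { useEffect, useState } from "react";

export function StrengthAnalysis({ answers }: { answers: unknown[] }) {
  const [summary, setSummary] = useState<string | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    // Fires as soon as the results page mounts; the user waits for the response
    fetch("/api/generate-strength-analysis", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ answers }),
    })
      .then((res) => res.json())
      .then((data) => setSummary(data.analysis))
      .catch(() => setError("Analysis failed. Please try again."));
  }, [answers]);

  if (error) return <p>{error}</p>;
  return <p>{summary ?? "Generating your strength analysis…"}</p>;
}
```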

C. Strength Finder — AI Integration

C1. Where is the AI request made from — client, API route, or backend worker?

Answer: API route (server-side):
  • AI requests made from /api/generate-strength-analysis/route.ts
  • Not from client - server-side API route
  • Not background worker - synchronous API call
  • Server validates authentication before AI processing
  • Returns processed analysis to client

C2. Are we currently sending the entire 64-question responses as raw text or formatted JSON?

Answer: Formatted JSON converted to descriptive text:
  • Raw answers sent as JSON array
  • Converted to readable text format in formatQuestionsAndAnswers()
  • Personality insights added to raw responses
  • Not raw JSON - transformed into narrative format
  • Includes scoring interpretation and behavioral patterns
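
For illustration, a minimal sketch of the kind of transformation formatQuestionsAndAnswers() performs; the wording and structure of the output are assumptions:

```typescript
// Converts raw answers into a narrative block for the AI prompt - illustrative only
interface AnswerWithContext {
  sectionTitle: string;
  questionText: string;
  value: number | string | string[];
}

function formatQuestionsAndAnswers(answers: AnswerWithContext[]): string {
  return answers
    .map((a) => {
      const response = Array.isArray(a.value) ? a.value.join(", ") : String(a.value);
      return `[${a.sectionTitle}] ${a.questionText}\nResponse: ${response}`;
    })
    .join("\n\n");
}
```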

C3. How big (in characters / tokens) is one user’s total input payload?

Answer: Approximately 3,000-5,000 characters:
  • 25 questions with answers and descriptions
  • Converted to narrative format for AI processing
  • Token estimate: ~800-1,200 tokens (including prompt)
  • Prompt template adds ~1,000 characters
  • Total input payload: moderate size, well within API limits

C4. What is the average latency or generation time from Gemini for one 400-word report?

Answer: 5-15 seconds for complete generation:
  • Uses OpenRouter API with Gemma 3 4B model
  • 400-word reports generated in single API call
  • Max 550 tokens limit (approximately 400 words)
  • No streaming used in the UI - full response returned at once
  • Network latency + AI processing time
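
A hedged sketch of the server-side OpenRouter call with the 550-token cap; the model identifier and prompt handling are assumptions:

```typescript
// Illustrative OpenRouter chat-completion call (OpenAI-compatible API)
async function generateStrengthSummary(prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "google/gemma-3-4b-it", // assumed model id for Gemma 3 4B
      max_tokens: 550,               // ~400-word report
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```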

C5. Are we caching previous AI outputs or always regenerating on each run?

Answer: Always regenerating - no caching:
  • No caching mechanism implemented
  • Each assessment generates fresh AI analysis
  • Same inputs may produce different outputs
  • No performance optimization for repeated requests
  • All analyses are unique per generation

C6. Do we plan to move all AI analysis to OpenAI API (GPT-5 mini/nano), or will Gemini remain as backup?

Answer: Currently using OpenRouter (Gemma):
  • No immediate plans to migrate to OpenAI
  • Gemma 3 4B is primary model via OpenRouter
  • Gemini could serve as a backup but is not currently configured
  • OpenRouter provides model flexibility
  • Cost-effective solution with good performance

C7. Are we currently using any queueing system (e.g., n8n, Redis, or background job scheduler)?

Answer: No queueing system for AI:
  • AI generation is synchronous
  • N8N used only for PDF generation (not AI)
  • No Redis or background job scheduler
  • No queuing - direct API calls
  • Potential bottleneck for concurrent users

C8. What is the expected peak concurrency (e.g., 500 requests in 90 minutes — are they evenly spread or bursty)?

Answer: Bursty concurrency expected:
  • 500 students likely to finish simultaneously
  • Not evenly spread - peak load at deadline
  • 90-minute window creates high concurrency
  • No load balancing or queuing implemented
  • Potential performance issues under peak load

C9. Will we generate PDFs immediately after AI output, or asynchronously via queue?

Answer: PDFs generated via external service:
  • Not immediately after AI - separate process
  • N8N webhook handles PDF generation
  • Asynchronous but not queued in app
  • External service manages PDF creation
  • Users can download when ready
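
For illustration, a minimal sketch of handing a finished report to the N8N webhook; the webhook URL variable and payload fields are hypothetical:

```typescript
// Illustrative call to the external N8N webhook that produces the PDF
async function requestPdfGeneration(userId: string, userName: string, batch: string, report: string) {
  const res = await fetch(process.env.N8N_PDF_WEBHOOK_URL!, { // hypothetical env var
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, userName, batch, report }),
  });
  if (!res.ok) throw new Error(`PDF webhook failed: ${res.status}`);
}
```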

C10. Are we currently using any queueing system (e.g., n8n, Redis, or background job scheduler)?

Answer: N8N used for PDFs only:
  • N8N webhooks for PDF generation
  • No Redis or job scheduler
  • No queuing for AI - synchronous processing
  • External service dependency for PDFs
  • No application-level queuing

D. Database & Supabase

D1. Which Supabase plan are we using right now (Free / Pro / Team)?

Answer: Cannot determine from codebase:
  • No explicit plan information in configuration
  • Free tier limits apply: 500MB database plus file storage and bandwidth caps
  • Current usage: 37 profiles, 1 strength finder assessment
  • Storage buckets: Multiple buckets configured
  • Production URL: pesgbqyqnecfxloazjjv.supabase.co

D2. What are our current storage and row limits in Supabase?

Answer: Unknown exact limits:
  • Database rows: 37 profiles, minimal other data
  • Storage: Multiple buckets, no size limits visible
  • File size limit: offer-letters bucket limited to 10MB files
  • Well within free tier limits based on current usage
  • No custom limits configured

D3. How large is one complete record (answers + metadata + report)?

Answer: Approximately 5-10KB per record:
  • Raw answers: JSON array (~2KB)
  • Section scores: JSON object (~1KB)
  • AI analysis: ~400 words text (~3KB)
  • Metadata: User info, timestamps (~1KB)
  • Total: ~7KB per strength finder assessment

D4. How many concurrent connections does our Supabase instance handle at peak?

Answer: Unknown exact limits:
  • Free tier: Limited concurrent connections
  • Current load: Minimal (development/testing)
  • No connection pooling configured
  • Potential bottleneck for 500 concurrent users
  • Need Pro/Team plan for production load

D5. Do we use Supabase Functions or only direct queries from client code?

Answer: Direct queries only:
  • No Supabase Edge Functions used
  • All database operations via direct client queries
  • API routes handle complex logic server-side
  • Client-side queries for simple operations
  • No serverless functions in Supabase

D6. Is row-level security (RLS) active on the tables storing student answers?

Answer: RLS enabled on all tables:
  • strength_finder_assessments has RLS policies
  • Users can view their own assessments
  • Admins can view all assessments
  • Secure by default - proper access controls
  • RLS policies defined in migration files

D7. Are we indexing data for faster queries (especially on user_id, test_id)?

Answer: Basic indexing implemented:
  • user_id index on strength_finder_assessments
  • completed_at index for time-based queries
  • category index for filtering
  • No advanced indexing visible
  • Sufficient for current load

D8. Do we log test completion timestamps to compute analytics later?

Answer: Yes, comprehensive timestamps:
  • created_at: When record was created
  • completed_at: When assessment was finished
  • Timestamps used for analytics
  • Duration calculation possible
  • Audit trail maintained

D9. Are we using Supabase Realtime to show progress or live updates?

Answer: No Realtime features:
  • No Supabase Realtime subscriptions
  • No live progress updates
  • Static progress tracking only
  • No real-time notifications
  • Data refreshes only on page load or explicit re-fetch

D10. Do we store AI outputs (final reports) as plain text or JSONB objects in Supabase?

Answer: Plain text storage:
  • ai_summary field: Plain text (~400 words)
  • Not JSONB - text format for readability
  • No structured JSON for AI outputs
  • Full text search possible
  • Simple retrieval and display

E. Scaling & Performance

E1. What is our deployment region (e.g., Mumbai, Singapore, Frankfurt)?

Answer: Unknown from codebase:
  • Supabase project: pesgbqyqnecfxloazjjv.supabase.co
  • Vercel deployment: Global CDN
  • No region specification in code
  • Likely Singapore/Mumbai for Indian users
  • Need to check Supabase dashboard

E2. Are we using any caching layer (Redis / Vercel Edge cache) for rate-limiting or deduplication?

Answer: No caching layer:
  • No Redis configured
  • No Vercel Edge cache for API routes
  • No rate limiting implemented
  • No deduplication for AI requests
  • Potential performance issues

E3. Are we using serverless queue workers (e.g., Vercel Queues / Upstash / Cloud Run)?

Answer: No serverless queues:
  • No Vercel Queues
  • No Upstash or similar
  • No background job processing
  • Synchronous AI generation
  • No Cloud Run integration

E4. How much average bandwidth does one user session consume (especially during PDF download)?

Answer: Moderate bandwidth usage:
  • Assessment data: ~10KB (answers + metadata)
  • AI analysis: ~5KB (400 words)
  • PDF download: 50-200KB (depending on content)
  • Images/assets: Cached via Vercel CDN
  • Total per session: ~500KB

E5. Do we anticipate all 500 students finishing at nearly the same time or staggered?

Answer: Bursty traffic expected:
  • 500 students likely to finish simultaneously
  • Not staggered - deadline-driven behavior
  • Peak load scenario needs planning
  • Load testing required

E6. Are we planning to use Vercel Cron Jobs or Supabase Functions for periodic cleanup or re-generation?

Answer: No cron jobs visible:
  • No Vercel Cron Jobs configured
  • No Supabase Functions for cleanup
  • No periodic tasks implemented
  • Manual cleanup only
  • No automated maintenance

E7. Do we have monitoring or alerting set up (e.g., PostHog, Logtail, or Supabase logs)?

Answer: Basic logging only:
  • Console logging in API routes
  • No PostHog or Logtail
  • Supabase logs available but not integrated
  • No alerting system
  • No performance monitoring

E8. Is there any backup or disaster recovery plan (e.g., automatic storage backups)?

Answer: No explicit backup strategy:
  • Supabase handles backups (standard)
  • No custom backup procedures
  • No disaster recovery plan
  • Relies on Supabase’s backup system
  • Production risk without custom backups

E9. How are errors from AI handled — retries, queued regeneration, or fallback text?

Answer: Basic error handling:
  • No retries implemented
  • No queuing for failed requests
  • No fallback text - errors shown to users
  • Console logging for debugging
  • User sees error messages

E10. How many AI calls per day (avg and peak) do we expect by month-end?

Answer: 500 AI calls total:
  • 500 students × 1 assessment each
  • Peak load: All calls within 90 minutes
  • Average: ~16 calls per day over 30 days
  • Peak: ~300+ calls per hour
  • No daily limits or throttling

F. Branding & PDF Generation

F1. Are we using Puppeteer / Playwright / jsPDF / PDF-Lib for PDF rendering?

Answer: External service (N8N):
  • Not Puppeteer/Playwright/jsPDF
  • N8N webhooks handle PDF generation
  • External service creates PDFs
  • No client-side PDF generation
  • API-based PDF creation

F2. Is the design template stored as HTML/CSS or Canvas/Canva file?

Answer: Unknown - external service:
  • N8N handles templating
  • No templates visible in codebase
  • External configuration
  • Not stored in repository
  • Need to check N8N setup

F3. Do we plan to include graphics, charts, or progress bars in the PDF?

Answer: Unknown - depends on N8N template:
  • Radar charts shown in UI preview
  • Progress bars implemented in UI
  • PDF content depends on N8N configuration
  • No PDF templates in codebase
  • External service controls design

F4. How large (in MB) is each generated PDF?

Answer: Unknown size:
  • No sample PDFs in codebase
  • Depends on content length
  • AI reports: ~400 words text
  • Estimated: 50-200KB per PDF
  • Storage limit: 10MB per file

F5. How are PDFs currently named and saved (e.g., userId_date.pdf)?

Answer: Structured naming convention:
  • Format: {userName}-{userId.slice(-6)}.pdf
  • Example: JohnDoe-abc123.pdf
  • Path: {batch}/{filename}
  • Batch-based organization
  • Unique filenames

F6. Will users be able to download immediately or receive by email?

Answer: Download when available:
  • Not immediate - generated asynchronously
  • The app checks for an existing PDF before generating a new one
  • Download via API when ready
  • No email delivery implemented
  • Manual download process

F7. Are we tracking PDF download counts for analytics?

Answer: No download tracking:
  • No analytics for downloads
  • No tracking implemented
  • Basic file storage only
  • No usage metrics
  • No download counters

F8. Are there any privacy or security restrictions on who can access generated reports?

Answer: Basic access control:
  • Supabase storage policies control access
  • User-based permissions
  • Private bucket for sensitive data
  • Authentication required
  • No additional restrictions

F9. Do we plan to generate PDFs in parallel or queue them after report text generation?

Answer: External queuing in N8N:
  • N8N handles queuing
  • Not parallel in application
  • Sequential processing
  • External service manages load
  • No application-level queuing

F10. Should the PDF generator also handle watermarking or branding automatically?

Answer: Depends on N8N configuration:
  • N8N can handle branding
  • Watermarking possible in external service
  • No branding logic in codebase
  • External customization
  • Template-based branding

G. Future & Expansion

G1. Are we planning to integrate Skill Gap Analysis + Strength Finder under one dashboard?

Answer: Not currently implemented:
  • Strength Finder only currently
  • Skill Gap Analysis not in codebase
  • Separate features for now
  • Future integration possible
  • Unified dashboard would be beneficial

G2. Do we want to introduce user-specific AI chat summaries or “Ask your counsellor” bots?

Answer: Not implemented:
  • Static AI reports only
  • No chat functionality
  • No counsellor bots
  • One-way AI analysis
  • Future enhancement opportunity

G3. Will there be admin analytics (e.g., average score per college or region)?

Answer: Basic analytics possible:
  • Database stores scores by category
  • No admin analytics dashboard
  • Raw data available for analysis
  • No reporting interface
  • Manual analysis required

G4. Are we considering multi-language support (English + Hindi)?

Answer: English only currently:
  • No i18n implementation
  • Hard-coded English text
  • No language switching
  • Single language support
  • Multi-language not planned

G5. Do we need offline capability for slow-internet students?

Answer: No offline capability:
  • Requires internet for AI analysis
  • No service worker implemented
  • No offline storage
  • Online-only application
  • Internet dependency

G6. Are we planning to send WhatsApp / email reports automatically after test completion?

Answer: No automatic sending:
  • Manual download only
  • No WhatsApp integration
  • No email delivery implemented
  • User-initiated only
  • No automated distribution

G7. Will we track re-attempts and compare reports across attempts?

Answer: No re-attempt tracking:
  • Single assessment per user
  • No attempt history
  • No comparison features
  • One-time assessment
  • No retest capability

G8. Do we want to store AI version / model used in the report metadata?

Answer: No model versioning:
  • Hard-coded model: Gemma 3 4B
  • No version tracking
  • No model metadata stored
  • Static model selection
  • No version history

G9. Will the app ever need to export data for UGC / college dashboards (e.g., CSV reports)?

Answer: Possible future requirement:
  • No export functionality currently
  • Data stored in Supabase
  • CSV export possible with development
  • College dashboards not implemented
  • Data portability would be useful

G10. Are there any plans for institutional bulk usage (e.g., 5,000+ students in one day)?

Answer: Not prepared for bulk usage:
  • Current architecture: Not scalable for 5,000+ users
  • No queuing system for AI calls
  • Synchronous processing bottleneck
  • Supabase limits may be exceeded
  • Infrastructure upgrade needed

Summary & Recommendations

Current Strengths:

  • Modern Next.js 15 + React 19 architecture
  • Comprehensive authentication with Supabase
  • Well-structured Strength Finder assessment
  • TypeScript throughout for reliability
  • Vercel deployment with auto-scaling

Critical Scaling Concerns:

  1. AI Processing: Synchronous AI calls will bottleneck at 500 concurrent users
  2. No Queuing: No background job processing for high load
  3. PDF Generation: External N8N dependency without fallback
  4. Database Limits: May exceed free tier limits under load
  5. No Caching: Every request hits external APIs

Immediate Action Items:

  1. Implement AI request queuing system
  2. Add caching for AI responses
  3. Set up proper monitoring and alerting
  4. Upgrade Supabase plan for production load
  5. Implement proper error handling and retries (see the sketch after this list)
  6. Add load testing before launch
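
As a starting point for item 5, a simple retry-with-backoff wrapper around the AI call; this is a sketch only, and the retry policy would need tuning for real load:

```typescript
// Illustrative retry wrapper with exponential backoff (1s, 2s, 4s...)
async function withRetries<T>(fn: () => Promise<T>, attempts = 3, baseDelayMs = 1000): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage: const summary = await withRetries(() => generateStrengthSummary(prompt));
```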