Job Search Ecosystem
Two Next.js apps that hand off to each other. Paste a job description, get a tailored resume and a keyword score, then track the application with a Kanban board. No retyping.
How it works
1. Drop any JD into Resume Tailor, no account needed. It parses company, role, location, and salary in one pass. (resume-tailor)
2. OpenAI (default) or the Anthropic API scores your resume against the JD's keywords and generates a tailored version, streaming results live with a keyword match score. (openai / anthropic)
3. Click "Save to Job Tracker": a signed JWT deep link pre-fills your Job Application Tracker with company, role, salary, and keyword score. No retyping. (job-tracker)
The tools
Each tool is independently useful. Together, they cover the full job search funnel.
Resume Tailor: Paste a JD. Get a tailored resume in seconds.
Job Tracker: Every application. Every status. One board.
Architecture
Every decision in Rouse was made for a reason. Here are the three that shaped the system most.
All parsing, scoring, and tailoring happens in a single LLM call with a tagged output format. The client splits the response on the closing tag, parses the JSON immediately, then streams the generated resume text that follows. This eliminates cascading latency, reduces cost, and simplifies error handling.
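A minimal sketch of that split, assuming the metadata arrives first inside a tag and the resume text follows; the <meta> tag name, the JobMeta fields, and the stream shape are illustrative assumptions, not Rouse's actual wire format:

```typescript
// Splits one streamed LLM response into structured metadata (parsed once,
// as soon as the closing tag arrives) and resume text (streamed through).
interface JobMeta {
  company: string;
  role: string;
  location: string;
  salary: string;
  keywordScore: number;
}

function makeStreamSplitter(
  onMeta: (meta: JobMeta) => void,
  onResumeChunk: (text: string) => void
) {
  const CLOSE = "</meta>";
  let buffer = "";
  let metaParsed = false;

  return (chunk: string) => {
    if (metaParsed) {
      onResumeChunk(chunk); // past the tag: pass resume text straight through
      return;
    }
    buffer += chunk;
    const idx = buffer.indexOf(CLOSE);
    if (idx === -1) return; // closing tag not seen yet, keep buffering
    // Before the tag: JSON metadata. After it: the start of the resume text.
    const jsonPart = buffer.slice(0, idx).replace("<meta>", "");
    onMeta(JSON.parse(jsonPart));
    metaParsed = true;
    const rest = buffer.slice(idx + CLOSE.length);
    if (rest) onResumeChunk(rest);
  };
}
```

Only the prefix up to the closing tag is ever buffered; once the metadata is parsed, each later chunk is forwarded with no accumulation, which is what keeps the single-call design from adding latency.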
Resume Tailor hands off to Job Tracker by encoding parsed metadata in a 5-minute JWT, signing it with a shared secret, and deep-linking the user to Job Tracker with the token as a query param. No auth infrastructure, no service-to-service secrets, no added complexity. The user's browser carries the payload.
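A sketch of that handoff token using Node's built-in crypto module: an HS256 JWT with a 5-minute expiry, signed with the shared secret. The claim names, the /import path, and the hand-rolled signing are assumptions for illustration; real code would normally reach for a vetted JWT library.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

const b64url = (data: string | Buffer) =>
  Buffer.from(data).toString("base64url");

// Resume Tailor side: sign the parsed metadata with a 5-minute lifetime.
function signHandoffToken(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const exp = Math.floor(Date.now() / 1000) + 5 * 60; // expires in 5 minutes
  const body = b64url(JSON.stringify({ ...payload, exp }));
  const sig = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Job Tracker side: verify the signature and expiry, or reject.
function verifyHandoffToken(token: string, secret: string): object | null {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  if (
    sig.length !== expected.length ||
    !timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
  ) {
    return null; // signature mismatch: token was forged or tampered with
  }
  const claims = JSON.parse(Buffer.from(body, "base64url").toString());
  if (claims.exp < Math.floor(Date.now() / 1000)) return null; // expired
  return claims;
}

// The deep link then looks something like:
//   https://job-tracker.example/import?token=<jwt>
// and Job Tracker calls verifyHandoffToken(token, SHARED_SECRET) on load.
```

Because the token rides in the user's own navigation, the two apps never talk to each other directly; the shared secret and the short expiry are the only trust machinery needed.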
Resume Tailor has no database, no login, and no auth. It's a public tool, just paste and go. Job Tracker owns all the persistence and identity. This separation keeps Resume Tailor lean and lets Job Tracker's data model scale cleanly. Auth only where value is stored, not where computation lives.