I Built My Own AI Job Search Tool — Here's What Actually Worked
Job hunting is miserable. Not because rejection is hard (it is), but because the process itself is broken. You're applying for multiple roles simultaneously, each requiring a slightly different version of you, a slightly different emphasis on your experience, a slightly different tone. You're copying and pasting from a Google Doc, tweaking cover letters manually, losing track of where you applied, forgetting what you said in the first screening call.
I was doing all of this across three completely different role tracks at once — Frontend/Full-Stack Engineering, Hybrid Product Engineer/Technical PM, and Pre-Sales/Solutions Engineering. Each track had its own CV. Each needed its own cover letter logic. Each interview required foregrounding different stories from my career.
I looked at the tools available. Generic cover letter generators that knew nothing about me. Application trackers that were glorified spreadsheets. Interview prep apps with stock questions and stock answers. None of them understood that the same experience — say, building the username-based transfer protocol at BantuPay — needs to be framed completely differently depending on whether you're talking to an engineering hiring manager or a solutions team lead.
So I built Tracklo.
What Tracklo Does
Tracklo has two modules.
Apply is an application tracker combined with an AI cover letter generator. You track every application — company, role, which CV track you used, current status, notes. Each application lives on a card that you can update as the process moves forward. Alongside the tracker sits the cover letter generator: you select your track (Engineering, Hybrid, or Pre-Sales), paste the job description, and hit Generate. Claude writes a cover letter of under 280 words: punchy, no filler, grounded in your actual experience.
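Here's roughly what that generation step looks like as a request builder — a minimal sketch, not Tracklo's actual code; the type names and exact prompt wording are illustrative:

```typescript
// Sketch of building a Claude request for the cover letter generator.
// Track values mirror the three tracks in the article; everything else
// (names, phrasing) is an assumption for illustration.
type Track = "engineering" | "hybrid" | "presales";

interface CoverLetterRequest {
  system: string;
  messages: { role: "user"; content: string }[];
}

function buildCoverLetterRequest(
  track: Track,
  jobDescription: string
): CoverLetterRequest {
  return {
    system:
      `You write cover letters for a candidate applying to ${track} roles. ` +
      "Keep it under 280 words. Be punchy. No filler. " +
      "Ground every claim in the candidate's real experience.",
    // The pasted job description becomes the user message
    messages: [{ role: "user", content: `Job description:\n${jobDescription}` }],
  };
}
```

The returned object maps directly onto the Anthropic Messages API's `system` and `messages` fields.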
Prep is an interview question bank with AI-powered practice. You build up a bank of questions filtered by category (Behavioural, Technical, Product Sense, Commercial) and track. Click any question to get a model answer in STAR format, generated by Claude and personalised to your background. Then there's a self-answer zone: you type your own answer, hit Get Feedback, and Claude scores you on Clarity, Specificity, and Impact — each out of five — and gives you two concrete improvements in under 150 words.
Both modules feed your scores and answers back to Supabase so you can review your practice history over time.
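For the Prep side, that round trip needs the three scores pulled out of Claude's feedback before they can be stored. A sketch of what that could look like — the `Clarity: 4/5` line format and the table shape are my illustrative assumptions, not a documented contract:

```typescript
// Hypothetical practice-history row and a parser for the three scores.
// Assumes Claude is prompted to emit lines like "Clarity: 4/5".
interface PracticeRecord {
  question_id: string;
  answer: string;
  clarity: number;     // each score is out of 5
  specificity: number;
  impact: number;
}

function parseScores(
  feedback: string
): Pick<PracticeRecord, "clarity" | "specificity" | "impact"> {
  const grab = (label: string): number => {
    const m = feedback.match(new RegExp(`${label}:\\s*(\\d)\\s*/\\s*5`, "i"));
    if (!m) throw new Error(`score for ${label} not found in feedback`);
    return Number(m[1]);
  };
  return {
    clarity: grab("Clarity"),
    specificity: grab("Specificity"),
    impact: grab("Impact"),
  };
}
```

A parsed row would then go up with something like `supabase.from("practice_history").insert(record)` — table name assumed.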
The Stack
- Frontend: React + Tailwind, scaffolded with Lovable
- Database: Supabase (free tier for personal use, Row-Level Security in place for future multi-user)
- AI: Anthropic Claude API (Sonnet)
- Hosting: Vercel
The architecture is SaaS-ready from day one. Adding auth, user profiles, and Stripe billing is additive — not a rewrite. That was a deliberate decision I made early, and it paid off.
The Thing That Actually Changed Everything
The original version used hardcoded CV summaries. For each track, I had a config object: a short blurb about my background and a list of keywords. Claude would use that as context when generating cover letters and model answers.
It worked. But the output was generic. Phrases like "product-minded engineer with experience across fintech and healthtech" kept appearing because that's what I'd written in the config. Claude was faithfully following the brief — the brief was just thin.
The fix was a CV Settings page. Each track has its own tab. You upload a PDF or DOCX, the text is extracted client-side using `pdf-parse` and `mammoth`, and you can review and edit it before saving. The raw text goes to Supabase in a `cv_contexts` table — one row per track, upserted on save.
Now, when you generate a cover letter, the app fetches the CV text for that track first. If it exists, it injects the full text into the Claude system prompt:
*"You are a cover letter writer. The candidate's CV for this track is as follows: [cv_text]. Use specific details, achievements, and language from this CV. Do not invent experience not present in the CV."*
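Assembled in code, that injection is a simple conditional — this sketch uses the prompt text from above, but the function name and the fallback path are illustrative:

```typescript
// Build the system prompt: inject the full CV text when it exists,
// otherwise fall back to the old short config blurb.
function buildSystemPrompt(cvText: string | null, fallbackBlurb: string): string {
  if (!cvText) {
    // No CV uploaded for this track yet
    return `You are a cover letter writer. Candidate background: ${fallbackBlurb}`;
  }
  return (
    "You are a cover letter writer. " +
    `The candidate's CV for this track is as follows: ${cvText}. ` +
    "Use specific details, achievements, and language from this CV. " +
    "Do not invent experience not present in the CV."
  );
}
```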
The difference was immediate. Claude started referencing the username-based transfer protocol at BantuPay by name. It cited the 15% to 2% error rate reduction. It mentioned the 87,000 users across 147 countries with the right framing for each track — as a technical achievement for engineering roles, as a product outcome for hybrid roles, as a commercial proof point for pre-sales.
The same principle applies to interview prep. Model answers now reference your actual career. Feedback is grounded in what you've actually done, not in a generic version of a product engineer.
What I Learned About Building with Lovable
Lovable is genuinely fast for React scaffolding if you use it well. The key is treating each prompt as a single, focused unit of work. I had seven build steps, each with a ready-to-paste prompt. Step 1 was just the project setup and theme. Step 2 was the Supabase integration. I didn't combine them. Every time I tried to do too much in one prompt, something broke.
The other thing: save a version after every step that works. Lovable's version history is your Git. Use it.
API keys go in the env variables panel. Never hardcoded. Never.
What's Next
Tracklo is currently personal use only. The SaaS upgrade path is mapped out: Supabase Auth with email and Google OAuth, Row-Level Security on all tables, user profiles with custom CV track summaries, and Stripe billing with a free tier (limited AI generations) and a Pro tier at £9/month.
The biggest thing I'd change if I were starting again: move the Claude API calls to a Supabase Edge Function from day one. Calling the Anthropic API directly from the browser works fine for personal use, but it's not viable for a public product — your API key would be exposed. Edge functions solve this cleanly.
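A minimal sketch of that edge function, as a proxy handler. The Messages endpoint, `x-api-key` header, and `anthropic-version` value are Anthropic's documented API; the handler name, route shape, and pinned model are my assumptions:

```typescript
// Supabase Edge Function sketch: proxy Claude calls so the API key
// lives in a server-side secret instead of the browser bundle.
async function handleGenerate(req: Request): Promise<Response> {
  const { prompt } = await req.json().catch(() => ({ prompt: undefined }));
  if (!prompt) {
    return new Response(JSON.stringify({ error: "prompt required" }), { status: 400 });
  }

  // In the Deno edge runtime the key comes from an environment secret
  const apiKey = (globalThis as any).Deno?.env.get("ANTHROPIC_API_KEY");

  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022", // illustrative model pin
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  });

  // Pass Claude's response straight back to the browser
  return new Response(await upstream.text(), { status: upstream.status });
}
// In the actual Edge Function entry point you'd register it with:
// Deno.serve(handleGenerate);
```

The browser then calls your own `/functions/v1/...` endpoint, and the Anthropic key never ships to the client.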
The Honest Takeaway
The best version of this product came from replacing abstractions with reality. Generic summaries produce generic output. When Claude has access to your actual CV — your real projects, your real metrics, your real language — the output is harder to ignore.
That's probably true of most AI-powered tools. The model isn't the differentiator. The quality of the context you give it is.
Build it for yourself first. Solve your own problem properly. The product instincts you develop in the process are worth more than any spec doc.