Firecrawl
Research Engineer – Evals
Experienced · Hybrid · Full-time
Location
San Francisco, CA
Salary
$160k–$240k/yr
Experience
3+ years
Posted
Today
Job Description
RESEARCH ENGINEER — EVALS

You'll build the evaluation systems that tell us whether Firecrawl actually works. That sounds simple. It isn't. Our core promise — convert any URL into clean, structured, LLM-ready data reliably — is hard to measure rigorously across millions of different websites, formats, and edge cases. As we layer in models and agent workflows, the question "did that work?" gets harder, not easier.

This isn't an eval role where you inherit a framework and run benchmarks. You'll design the metrics, build the pipelines, generate the datasets, and own the feedback loop from output quality back to model and product decisions. If you care about what "good" actually means and have the engineering depth to measure it, this is the role.

Location: San Francisco, CA (Hybrid) or Remote (Americas, UTC-3 to UTC-10)
Employment Type: Full-time
Department: Engineering
Compensation: $160K – $240K • 0.01% – 0.10% equity
Salary Range: $160,000 to $240,000/year (range shown is for U.S.-based employees in San Francisco, CA; compensation outside the U.S. is adjusted fairly based on your country's cost of living)
Equity Range: Up to 0.10%
Experience: 3+ years in ML engineering, applied AI, or data quality — with production systems
Visa: US citizenship/visa required for SF; N/A for remote

ABOUT FIRECRAWL

Firecrawl is the easiest way to extract data from the web. Developers use us to reliably convert URLs into LLM-ready markdown or structured data with a single API call. In just a year, we've hit millions in ARR and 50k+ GitHub stars by building the fastest way for developers to get LLM-ready data.

Previously, we built Mendable, one of the first commercially available "chat with your data" applications. We sold to companies like MongoDB, Coinbase, Snapchat, and more. To do this, we spent a surprising amount of time building reliable infrastructure for getting clean data from the web.
When we started to see our founding friends rebuilding the same thing, we thought we might be on to something.

WHY FIRECRAWL

- Technical ownership – Lead our evaluation systems and the infrastructure behind them
- Real impact – Directly shape how quality measurement drives our entire product
- High velocity – Rapid iteration and deployment of your work
- Small team, big ambition – Collaborate closely with founders, influencing key decisions and future directions

WHAT YOU'LL DO

Build the eval stack from scratch. Design and own the systems that measure whether Firecrawl's outputs are actually good — across scrape, crawl, extract, and map. That means defining metrics, building pipelines, curating datasets, and integrating evals into CI/CD so regressions get caught before they ship. You build the infra yourself because you're the one who needs it to work.

Design benchmarks that reflect reality. Our outputs need to hold up across millions of websites — SPAs, paywalled content, dynamic rendering, structured and unstructured formats. You'll build benchmark datasets that cover the real distribution of what our customers send us, including the edge cases that break naive approaches. Ground truth doesn't come for free — you'll design the collection and labeling systems too.

Own LLM-as-judge pipelines. You'll design and validate automated judges that score extraction quality at scale, know the failure modes of LLM-based evaluation, and build the human review tooling needed when automation isn't enough. You understand the difference between an eval that measures something real and one that just flatters the system.

Close the loop with models and RL. Evals here aren't a reporting layer — they're a training signal. You'll work closely with the RL and Search/IR research engineers to turn quality measurements into reward signals and feedback loops that make models meaningfully better. Your benchmarks directly influence what gets trained next.

Run fast experiments and communicate clearly.
You design experiments that test meaningful hypotheses, run them quickly, and make decisions based on results. When you have findings, anyone on the team can understand what they mean — no decoder ring required.

WHAT WE'RE LOOKING FOR

Builds their own eval infrastructure. You don't wait for tooling to appear. You write the pipelines, curate the datasets, design the rubrics, and validate the judges yourself — because you understand that infra choices directly affect what you're actually measuring. You've run evals at scale and debugged the places where they lie.

Knows what "good" means for unstructured web data. You've worked with messy, real-world data before. You understand why markdown quality is hard to define, why structured extraction fidelity varies by schema, and why naive string-match metrics miss the point. You have strong opinions about what a useful benchmark actually looks like — and the rigor to validate them.

Fluent in LLM evaluation methodology. You understand LLM-as-judge systems, their correlation with human judgment, and where they break down. You've designed rubrics that hold up under adversarial inputs, built human review pipelines that scale, and know how to measure inter-rater agreement. You're not fooled by evals that only look good in aggregate.

Production-minded. You care about whether your evals reflect real production behavior, not just offline benchmarks. You've worked on systems serving real traffic and made hard tradeoffs between evaluation depth, coverage, and cost. A benchmark that doesn't represent what customers actually send isn't a benchmark worth maintaining.

Fast and clear. You'd rather run three rough experiments this week than one polished one next month. When you have results, anyone on the team can understand what they mean — and what to do next.

Backgrounds that tend to do well:

- ML engineers who've built eval or data quality systems at AI labs or applied teams.
- Engineers who've worked on LLM fine-tuning or RLHF pipelines and understand how feedback quality drives model improvement.
- People who've worked at the intersection of data infrastructure and model development.
- Anyone who's been the person on the team asking "but how do we know this actually works?"

WHAT WE'RE NOT LOOKING FOR

Benchmark runners. If your eval experience is running existing frameworks on existing benchmarks and reporting numbers, this isn't the right fit. We need someone who builds the frameworks and defines the benchmarks.

People who treat evals as an afterthought. If your default workflow is to build first and evaluate later — or to treat pass rates as a proxy for actual quality — you'll struggle here. Evals are a first-class product, not a QA gate.

Researchers who need a platform team. If you expect pipelines, datasets, and labeling infrastructure to exist before you can be productive, you'll be frustrated. You build the tools you need.

Slow iterators. If your standard experiment cycle is measured in weeks, not days, you'll struggle with the pace. We need someone who can design, run, and interpret a meaningful experiment within a day or two.

BONUS POINTS

- Previous experience at a scraping, automation, or security-focused startup
- Ex-founder
- Other niche expertise or skills relevant to the role

WHAT IT MEANS TO JOIN FIRECRAWL

- High Leverage — Your evals directly shape model and product quality.
- Autonomy — Own your domain; we care about outcomes, not hours.
- Remote-First Culture — Work from our new SF office or remotely, collaborating closely with the whole team.
- Growth Opportunity — Early equity and a role that scales with the company.
- Creative Freedom — Experiment with new metrics, datasets, and tooling. If it works, we run with it.
BENEFITS & PERKS

AVAILABLE TO ALL EMPLOYEES

- Salary that makes sense — $160,000–240,000/year (U.S.-based), based on impact, not tenure
- Own a piece — Up to 0.10% equity in what you're helping build
- Unlimited PTO — Minimum 3 weeks off encouraged; take the time you need to recharge
- Parental leave — 12 weeks fully paid, for moms and dads
- Wellness stipend — $100/month for the gym, therapy, massages, or whatever keeps you human
- Learning & development — Expense up to $150/year toward anything that helps you grow professionally
- Team offsites — A change of scenery, minus the trust falls
- Sabbatical — 3 paid months off after 4 years; do something fun and new

AVAILABLE TO US-BASED FULL-TIME EMPLOYEES

- Full coverage, no red tape — Medical, dental, and vision (100% for employees, 50% for spouse/kids) — no weird loopholes, just care that works
- Life & disability insurance — Employer-paid short-term disability, long-term disability, and life insurance — coverage for life's curveballs
- Supplemental options — Optional accident, critical illness, hospital indemnity, and voluntary life insurance for extra peace of mind
- Doctegrity telehealth — Talk to a doctor from your couch
- 401(k) plan — Retirement might be a ways off, but future-you will thank you
- Pre-tax benefits — Access to FSAs and commuter benefits to help your wallet out a bit
- Pet insurance — Because fur babies are family too

AVAILABLE TO SF-BASED EMPLOYEES

- SF HQ perks — Snacks, drinks, team lunches, and the occasional burst of chaotic startup energy

INTERVIEW PROCESS

1. Application Review – Send us your stuff, and a quick note on why you're excited
2. Automated Assessment (~30 min) – An initial automated assessment of your skills and knowledge
3. Intro Chat (~25 min) – Quick alignment call with a member of our team
4. Technical Interview (~1 hr) – Tackle a small challenge
5. Interview with Founders (~30 min) – Culture, vision, and long-term fit
6. Paid Work Trial (1–2 weeks) – Work on something real with us
7. Decision – We move fast

If you've ever wanted to own a product-critical system and build alongside founders, this is your moment. Apply now and let's talk.