
AI QA Trainer – LLM Evaluation

Worldwide - Remote

Are you an AI QA expert eager to shape the future of AI? Large-scale language models are evolving from clever chatbots into enterprise-grade platforms. With rigorous evaluation data, tomorrow’s AI can democratize world-class education, keep pace with cutting-edge research, and streamline workflows for teams everywhere. That quality begins with you—we need your expertise to harden model reasoning and reliability.

We’re looking for AI QA trainers who live and breathe model evaluation, LLM safety, prompt robustness, data quality assurance, multilingual and domain-specific testing, grounding verification, and compliance/readiness checks. You’ll challenge advanced language models on tasks like hallucination detection, factual consistency, prompt-injection and jailbreak resistance, bias/fairness audits, chain-of-reasoning reliability, tool-use correctness, retrieval-augmentation fidelity, and end-to-end workflow validation—documenting every failure mode so we can raise the bar.

On a typical day, you will converse with the model on real-world scenarios and evaluation prompts, verify factual accuracy and logical soundness, design and run test plans and regression suites, build clear rubrics and pass/fail criteria, capture reproducible error traces with root-cause hypotheses, and suggest improvements to prompt engineering, guardrails, and evaluation metrics (e.g., precision/recall, faithfulness, toxicity, and latency SLOs). You’ll also partner on adversarial red-teaming, automation (Python/SQL), and dashboarding to track quality deltas over time.

A bachelor’s, master’s, or PhD in computer science, data science, computational linguistics, statistics, or a related field is ideal; shipped QA for ML/AI systems, safety/red-team experience, test automation frameworks (e.g., PyTest), and hands-on work with LLM eval tooling (e.g., OpenAI Evals, RAG evaluators, W&B) signal fit. Skills that stand out include: evaluation rubric design, adversarial testing/red-teaming, regression testing at scale, bias/fairness auditing, grounding verification, prompt and system-prompt engineering, test automation (Python/SQL), and high-signal bug reporting. Clear, metacognitive communication—“showing your work”—is essential.

Ready to turn your QA expertise into the quality backbone for tomorrow’s AI? Apply today and start teaching the model that will teach the world.

We offer a pay range of $6 to $65 per hour, with the exact rate determined after evaluating your experience, expertise, and geographic location. Final offer amounts may vary from the pay range listed above. As a contractor, you'll supply a secure computer and high-speed internet; company-sponsored benefits such as health insurance and PTO do not apply.

Employment type: Contract
Workplace type: Remote
Seniority level: Mid-Senior Level

Apply for this job

* indicates a required field

Phone
Resume/CV*

Accepted file types: pdf, doc, docx, txt, rtf


Education


Proficiency means you can effectively communicate complex ideas, including academic or cognitively challenging topics, both verbally and in writing. This includes the ability to participate in professional conversations, explain nuanced concepts, and understand detailed instructions in the selected language(s).

Please enter the number of hours you are generally available each week, using numbers only. Decimals are allowed; do not include text or symbols.

Example: 40 or 32.5


Please enter your desired hourly rate in USD as a number only (no symbols, words, or ranges). Decimals are allowed; do not include $, commas, or text.

Example: 8.50 (for $8.50/hour)

Please acknowledge that you have read and agree to our Privacy Policy.*

By clicking "Submit Application", you agree to Invisible's Privacy Policy.