Research Engineer/Research Scientist – Model Transparency
About the AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
The deadline for applying to this role is Sunday 24th May 2026, end of day, anywhere on Earth.
Team Description
The ability to effectively evaluate and monitor AI systems will grow in importance as models become more capable, autonomous, and integrated into society. If models can detect and game evaluations, obscure their reasoning, or behave differently under observation, the safety claims that governments and developers rely on become unreliable. Understanding and addressing these risks is essential to ensuring that oversight of advanced AI systems keeps pace with their capabilities.
The Model Transparency team is a research team within AISI focused on ensuring that evaluations, assessments, and monitoring of frontier AI systems remain reliable as models become less transparent. We research how and why oversight is becoming less reliable – through phenomena such as evaluation awareness, unfaithful chain-of-thought reasoning, and changes in model architectures – and develop white-box and black-box methods to detect, measure, and mitigate these issues. We share our findings with frontier AI companies (including Anthropic, OpenAI, DeepMind), UK government officials, allied governments, and the public to inform deployment, research, and policy decisions. We also work directly with safety teams at frontier labs, contributing to safety case reviews and helping improve their alignment evaluation methodology.
Our recent work includes auditing games for sandbagging, reproducing natural emergent misalignment from reward hacking, and identifying open-weight language models that game propensity evaluations.
Role description
We're looking for Research Scientists and Research Engineers for the Model Transparency team with expertise in technical AI safety – such as interpretability, capability or alignment evaluations, or model transparency – or with broader experience in frontier LLM research and development. An ideal candidate would have a strong track record of high-quality research in technical AI safety or adjacent fields.
- Research Scientists drive the technical substance of our work – staying abreast of the literature, proposing and designing experiments, conducting rigorous analyses, and owning the evidence stack from experiment through to written output. They write, critique, and strengthen the team's reports and publications.
- Research Engineers build the systems and tooling that make our research possible and fast – scaling experimental workflows, automating processes, solving infrastructure challenges, and creating systems that accelerate the entire team's output.
We're interested in candidates along the spectrum between Research Engineers and Research Scientists. The application form will ask you to indicate which role you lean towards.
The team is led by Joseph Bloom, advised by Geoffrey Irving. You'll work with talented, mission-driven technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external research teams including those at frontier AI labs, METR, and FAR.
We are open to hires across a range of experience levels.
Representative Projects You Might Work On
- Developing a chain-of-thought monitorability benchmark and comparing monitorability properties across frontier AI systems, leveraging AISI’s unique access to reasoning traces from multiple labs.
- Designing and running experiments on open-weight models to study alignment and oversight-relevant phenomena – such as reproducing emergent misalignment from reward hacking, or red-teaming techniques like inoculation prompting and character training.
- Using white-box and interpretability methods – such as activation oracles, sparse autoencoders or probes – to detect misalignment that isn’t visible through behavioural evaluation alone.
- Building tooling and infrastructure for our research – including agent orchestration, large-scale RL pipelines, mechanistic interpretability methodologies, and auditing agents.
The work could also involve:
- Reviewing frontier lab risk assessments and safety cases, providing independent analysis of alignment claims before deployment decisions.
- Conducting literature reviews and expert interviews to map the state of model transparency risks and inform AISI’s strategic priorities.
- Translating technical findings into actionable insights for AISI evaluation teams, UK government officials, and international partners.
What we’re looking for
If you’re unsure whether you meet the criteria below, we’d encourage you to apply anyway – we’d rather you erred on the side of applying than not.
Requirements for both roles:
- A get-things-done mindset – you take ownership, move fast, and care about shipping work that matters.
- A combination of self-sufficiency and enthusiasm for teamwork – you’re equally happy defining your own agenda and contributing to shared goals. You’re excited about growing, giving and receiving feedback, and building something together.
- An ability to build, supervise, and orchestrate AI agents to complete tasks effectively, while verifying and maintaining the quality of their work.
- A demonstrated track record of relevant, high-quality work – whether technical publications, blog posts, or other publicly visible contributions.
Research Scientists – our requirements are:
- Hands-on research experience with large language models (LLMs) – such as evaluating or fine-tuning models, developing and testing monitors, or auditing models with white-box or black-box techniques.
- Ability and experience in writing research code for machine learning experiments, including experience with ML frameworks like PyTorch or evaluation frameworks like Inspect.
- An ability to write high-quality, concise research proposals that are well-motivated, tractable, and coherent.
- Good research taste – an ability to identify what’s important, choose productive directions, and avoid getting lost in dead ends.
- An ability to read research critically, identify flawed arguments, and poke holes in safety claims.
We don’t expect RS candidates to meet all of the following, but they are useful signal:
- Experience designing and running alignment evaluations or working on model transparency research.
- Experience with interpretability or white-box methods – such as mechanistic interpretability, sparse autoencoders, probing, or activation analysis.
- Familiarity with alignment literature, current methods for post-training and aligning LLMs, and the current state of the field.
- Prior mentorship or training within technical AI safety – such as through the MATS program or similar.
Research Engineers – our requirements are:
- Strong software engineering skills and experience building systems that support ML research – infrastructure, pipelines, tooling, or experimental platforms.
- Ability and experience writing production-quality code in Python and familiarity with ML frameworks like PyTorch.
- Experience working with LLMs at scale in some capacity – fine-tuning, deploying, evaluating, or building scaffolds around them.
- An understanding of the needs of research scientists, and experience working within and supporting a research team or building tools that support research.
We don’t expect RE candidates to meet all of the following, but they are useful signal:
- A track record of scaling AI automation – getting agents to do useful work, building orchestration systems, or accelerating research workflows with AI tooling.
- Experience working with very large models (~100B+) at scale, including post-training (RL, RLHF, DPO), fine-tuning pipelines, or distributed interpretability work on models that don’t fit into memory.
- Experience with mechanistic interpretability tooling or white-box analysis infrastructure at scale.
- Strong open-source contributions, particularly related to LLMs or AI safety.
- Proficient usage of LLM coding tools and agents.
Selection process
Candidates should expect to go through some or all of the following stages:
- CV / application review
- Screening call
- Technical assessment
- Research interview
- Behavioural interview
- Final interview with members of the senior leadership team
What We Offer
Impact you couldn't have anywhere else
- Incredibly talented, mission-driven and supportive colleagues.
- Direct influence on how frontier AI is governed and deployed globally.
- Work with the Prime Minister’s AI Advisor and leading AI companies.
- Opportunity to shape the first & best-resourced public-interest research team focused on AI security.
Resources & access
- Pre-release access to multiple frontier models and ample compute.
- Extensive operational support so you can focus on research and ship quickly.
- Work with experts across national security, policy, AI research and adjacent sciences.
Growth & autonomy
- If you’re talented and driven, you’ll own important problems early.
- 5 days off for learning and development, annual L&D stipends, and funding for conferences and external collaborations.
- Freedom to pursue research bets without product pressure.
- Opportunities to publish and collaborate externally.
Life & family*
- Modern central London office (cafes, food court, gym), or where applicable, option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
- Hybrid working, flexibility for occasional remote work abroad and stipends for work-from-home equipment.
- At least 25 days’ annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- On top of your salary, we contribute 28.97% of your base salary to your pension.
- Discounts and benefits for cycling to work, charitable donations, and retail/gyms.
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
The full range of salaries is set out below:
- Level 3: £65,000–£75,000 (Base £37,150 + Technical Allowance £27,850–£37,850)
- Level 4: £85,000–£95,000 (Base £44,195 + Technical Allowance £40,805–£50,805)
- Level 5: £105,000–£115,000 (Base £58,040 + Technical Allowance £46,960–£56,960)
- Level 6: £125,000–£135,000 (Base £71,525 + Technical Allowance £53,475–£63,475)
- Level 7: £145,000 (Base £71,525 + Technical Allowance £73,475)
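As an illustration of how these figures combine (using the bands above): an offer at the bottom of Level 5 would comprise a £58,040 base salary plus a £46,960 technical allowance, giving £105,000 take-home salary, with an employer pension contribution of 28.97% × £58,040 ≈ £16,814 paid on top.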
Additional Information
Use of AI in Applications
Artificial intelligence can be a useful tool to support your application; however, all examples and statements provided must be truthful, factually accurate, and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or content generated by artificial intelligence, as your own), applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; the civil servants concerned are banned from further employment in the Civil Service for 5 years. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Working for the Civil Service
Diversity and Inclusion