Safeguards Technical Governance Researcher
About the AI Security Institute
The AI Security Institute is the world’s largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, to carry out cyber-attacks, and to enable crimes such as fraud, as well as the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Role Description
As AI systems become more capable and integrated into the economy, they will increasingly be targeted by adversaries looking to take advantage of their weaknesses.
AISI’s Safeguards Analysis team works to understand, evaluate, and improve the technical measures to address these risks (“safeguards”). The team has previously performed pre-deployment tests of leading frontier models [1], and published new datasets [2] and attacks [3].
Our research gives us insights into the safety of frontier systems, and these insights are most impactful when shared.
As a government body, we are well positioned to go beyond technical output and perform research on technical governance relating to safeguards. In this vein, we have already published our Principles for Safeguard Evaluations, which lays out an approach for effective evaluation of frontier AI system safeguards.
There are many more pieces of technical governance research we think are important, such as:
- How can the outputs of safeguard evaluations be made actionable and decision-relevant?
- How can risk from fine-tuning API access be managed effectively?
- What are effective methods for differential or structured access to advanced AI capabilities?
Many of these directions have both a technical component and a governance component. This role involves working closely with those performing the technical work (and potentially performing technical work yourself) to ensure it is focused on producing actionable insights from a governance perspective.
We’re excited for a technical governance researcher to drive this work forward. Because these projects draw extensively on the team’s broader work and are high priority, this researcher will work closely with the workstream lead and other researchers so that the work reflects the team’s prioritisation and experience.
Your responsibilities will include:
- Ideating, developing and drafting technical governance research on safeguards.
- Reading and understanding in detail relevant research from both within and outside AISI to ensure governance research output is technically informed and grounded.
- Delivering compelling, well-written documents that are well received by relevant stakeholders and audiences.
- Managing multiple stakeholders (both technical and non-technical) involved in producing the research outputs.
Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes:
- Experience working on machine learning, AI, or AI security in industry, in academia, or independently.
- Strong research taste and experience, including ideating new research or technical governance products, executing independently and effectively on those products, and delivering them successfully to achieve high impact.
- Extremely strong written and verbal communication skills for both technical and non-technical audiences.
- Experience designing, delivering, and maintaining complex written products while managing diverse stakeholders and audiences.
- Red-teaming experience against any sort of system.
- A broad understanding of large language models and the risks surrounding them.
- Ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
- A willingness to bring your own voice and experience, with an eagerness to support your colleagues, prioritise the team’s success, and find new ways of getting things done within government.
- A sense of mission, urgency, and responsibility for success, with demonstrated problem-solving ability and a readiness to acquire any missing knowledge necessary to get the job done.
Salary & Benefits
We are hiring individuals at all levels of seniority and experience. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is set out below; each salary comprises a base salary and a technical talent allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000, inclusive of a base salary of £35,720 plus a technical talent allowance of between £29,280 and £39,280.
- Level 4 - Total Package £85,000 - £95,000, inclusive of a base salary of £42,495 plus a technical talent allowance of between £42,505 and £52,505.
- Level 5 - Total Package £105,000 - £115,000, inclusive of a base salary of £55,805 plus a technical talent allowance of between £49,195 and £59,195.
- Level 6 - Total Package £125,000 - £135,000, inclusive of a base salary of £68,770 plus a technical talent allowance of between £56,230 and £66,230.
- Level 7 - Total Package £145,000, inclusive of a base salary of £68,770 plus a technical talent allowance of £76,230.
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; the civil servants concerned are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and obtain baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter for details.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Working for the Civil Service
Diversity and Inclusion