Research Scientist - Strategic Awareness
About the AI Security Institute
The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications: the potential of AI to assist with the development of chemical and biological weapons, its use in cyber-attacks and in enabling crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Strategic Awareness
The mission of our team is to harness technical insights on AI trajectories to inform governmental preparedness for the greatest challenges of transformative AI. To achieve this, we conduct deep dives on critical uncertainties regarding AI trajectories and impacts, track key indicators and warning signals, and produce briefs for key government decision makers. We also provide technical assistance on preparedness and policy development.
Role Summary
As Research Scientist, you will play a critical role in driving the research work of the Strategic Awareness team. This work involves conducting rapid deep dives on critical policy-relevant questions about AI trajectories, contributing to products that communicate these findings to key decision makers, and applying research findings to preparedness and policy development. You will collaborate with experienced policy advisors and team members who specialise in communicating findings to key government decision makers.
We are looking for candidates with a “T-shaped” profile, with broad knowledge of frontier AI research and safety, and with deep expertise in a relevant topic area or set of methodologies. Depending on your expertise, you may lead work on AI capability trends and forecasts, or work on implications such as labour market impacts or loss of control risks. You may conduct literature reviews, leverage expert interviews, use forecasting methodologies, analyse data and model evaluation results, or conduct survey research.
Person Specification
To set you up for success, we are looking for the following skills, experience and attitudes, though no single candidate needs to meet all of them. We are flexible in shaping the role to your background, expertise, and level of experience.
- Critical and strategic thinking ability – a proven track record of identifying and prioritising the most insightful approaches to answering underspecified questions.
- Experience in deeply examining AI trajectories towards transformative AI and associated impacts.
- Broad knowledge of frontier AI model development and safety, including frontier AI research and training, evaluations, scaling laws, the compute stack, and real-world adoption.
- Deep expertise in one or more key topic areas OR a critical methodology for the team, including but not limited to:
  - Topic areas: frontier AI research and training, evaluations, scaling laws, the compute stack, real-world adoption, particular impacts or risks such as labour market impacts or loss of control, geopolitical considerations, and particular jurisdictions such as the non-Western AI ecosystem.
  - Methodologies: forecasting, quantitative modelling, national security indicators.
- A broad range of empirical research methodology and data science skills, including quantitative forecasting and modelling of AI progress, statistical analysis, and building dashboards and data visualisations.
- A proven track record of excellent writing, with the ability to understand and communicate complex technical concepts to non-technical stakeholders and to synthesise scientific results into compelling narratives.
- Affinity for scrappy and highly iterative work with fast feedback loops.
- A sense of mission, urgency, and responsibility for success, with strong problem-solving abilities and a readiness to acquire any missing knowledge needed to get the job done; motivated to conduct technical research with an emphasis on direct policy impact rather than exploring novel ideas of mainly academic interest.
- Ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team.
- Your own voice and experience, alongside an eagerness to support your colleagues and a willingness to do whatever is necessary for the team's success.
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is set out below; each comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280
- Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505
- Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195
- Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230
- Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
This post requires Security Clearance (SC), which requires at least 2 years of UK residency, and a willingness to undergo Developed Vetting (DV) if required. More detail on clearance eligibility can be found on the UK Government website.
National security vetting: clearance levels - GOV.UK
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. Participating government organisations supply these details to the Cabinet Office, and the civil servants concerned are banned for 5 years from further employment in the Civil Service. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters attempt to reapply for roles in the Civil Service. In this way, the policy is upheld and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. We therefore encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).