Research Engineer - Societal Resilience
About the AI Security Institute
The AI Security Institute is the world's largest team in a government dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist in the development of chemical and biological weapons, its use in carrying out cyber-attacks and enabling crimes such as fraud, and the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Role Description
The AI Security Institute research unit is looking for exceptionally motivated and talented Research Engineers to work in the Societal Resilience team.
Societal Resilience
Societal Resilience is a multidisciplinary team that studies how advanced AI models can impact people and society, measures the prevalence and severity of high-impact societal risks caused by frontier AI deployment, and develops mitigations to address these risks. Core research topics include the use of AI to assist with criminal activity, to undermine trust in information, to jeopardise psychological wellbeing, and to carry out malicious social engineering, as well as preventing critical overreliance on insufficiently robust systems. We are interested in both immediate and medium-term risks.
In this role, you’ll join a highly collaborative technical research team to help design and develop technical research projects into societal risks. These can include analysing usage data, designing sociotechnical audits and evaluations of AI-driven products and services, and gathering and curating datasets that help us monitor the exposure, severity, and vulnerability associated with different risks.
Person Specification
Successful candidates will work with our research scientists to design and run studies that answer important questions about the effects AI will have on society. For example: how are AI systems being adopted and used in different sectors of the economy? How can AI agents collude with each other in real-world simulations? How might AI systems bypass safeguards when used to commit fraud? Research engineers will support a range of research projects into societal resilience by providing specialised technical expertise, building data pipelines, and creating demos and simulations.
This is a multidisciplinary team, and we look for people with a diversity of backgrounds. We are especially excited about candidates with experience in one or more of these areas:
- Computational social science
- Machine learning
- Data science, especially natural language processing
- Experience building and maintaining complex data products
Required Skills and Experience
We select candidates based on skills and experience in the following areas:
- Data Engineering: We’re interested in how AI is being used by society and what negative impacts it’s having in real time. Doing this properly involves extensive data collection, cleaning, processing, and visualisation.
- Relevant techniques include web scraping (Selenium, BeautifulSoup, etc.), OCR, fetching from APIs with Python’s requests library, crowdsourcing, social media monitoring, public datasets, and web archives; a minimal illustrative sketch follows this list.
- Machine Learning Engineering: Extensive experience training machine learning models from scratch; familiarity with neural network internals and with how and when to use them. In particular, we’re looking for engineers familiar with multi-modal and agentic systems.
- Experience designing high-fidelity measurement and monitoring methods to assess particular risks or incidents.
- Knowledge of frontier models and how they are trained and used: experience working on model or system evaluations and other AI safety projects
- Strong verbal communication and interpersonal skills, and experience working on a collaborative research team
- Demonstrable interest in the societal impacts of AI
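To give a concrete flavour of the data-collection work described above, here is a minimal sketch of fetching and cleaning public data with Python’s requests library and BeautifulSoup. The endpoint, field names, and helper functions are hypothetical and chosen purely for illustration; they are not a real AISI data source or pipeline.

# Minimal illustrative sketch: the API endpoint and response fields below
# are hypothetical placeholders, not a real data source.
import requests
from bs4 import BeautifulSoup

API_URL = "https://example.org/api/v1/posts"  # hypothetical public API

def fetch_posts(limit: int = 100) -> list[dict]:
    # Fetch recent posts from the (hypothetical) API as a list of JSON records.
    response = requests.get(API_URL, params={"limit": limit}, timeout=30)
    response.raise_for_status()  # surface HTTP errors early
    return response.json()

def extract_text(html: str) -> str:
    # Strip markup from scraped HTML, keeping only the visible text.
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

if __name__ == "__main__":
    for post in fetch_posts(limit=10):
        print(extract_text(post.get("body_html", "")))

In practice, a collection step like this would feed the cleaning, processing, and visualisation stages described in the Data Engineering bullet above.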
Desired Skills and Experience
- Published work related to societal impacts of AI systems
- Experience conducting multimodal evaluations or training multimodal models.
- A specialisation in a particular field of social or political science, economics, cognitive science, criminology, security studies, AI safety, or another relevant field.
- Front-end software engineering skills to build UIs for studies with human participants.
Salary & Benefits
We are hiring individuals across a range of seniority and experience levels within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 and £39,280
- Level 4 - Total Package £85,000 - £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 and £52,505
- Level 5 - Total Package £105,000 - £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 and £59,195
- Level 6 - Total Package £125,000 - £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 and £66,230
- Level 7 - Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; the civil servants concerned are then banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters attempt to reapply for roles in the Civil Service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.