Criminal Misuse Research Scientist / Research Engineer
About the AI Security Institute
The AI Security Institute is the largest team in any government in the world dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, carry out cyber-attacks, and enable crimes such as fraud, as well as the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Criminal Misuse
The Criminal Misuse team seeks to understand how highly capable AI systems can be misused by criminals or adversaries wishing to cause disruption to society. The team conducts research on how cutting-edge AI capabilities could be misused for fraud, scams, influence operations, or radicalisation. The team builds threat models to establish policy-informed risk thresholds, develops demos to track those thresholds, and designs technical evaluations to gather evidence. We partner with government and labs to develop downstream mitigations, such as improving refusal policies.
Criminal Misuse is a strongly collaborative research team, led by the Societal Impacts Research Director, Professor Christopher Summerfield. Within this role, you will also have the opportunity to regularly interact with our highly talented and experienced staff across the Institute (including alumni from Anthropic, DeepMind, OpenAI, and ML professors from the Universities of Oxford and Cambridge), as well as with other partners from across government.
Person Specification
We are seeking a talented and ambitious Research Scientist/Engineer who is excited about developing and implementing our research vision for this team. The successful candidate will have strong technical skills, including experience with frontier AI systems, a vision for conducting technical research with direct policy impact, and a demonstrable interest in mitigating AI misuse.
In this role, you will need to work effectively within a team and will be expected to contribute to broader discussions about goals and strategy. However, you will be expected to be self-driven, to champion your own projects, to define the most important questions to answer, and to design and implement experiments that answer them, with the support of software engineers, data scientists and delivery staff. The Research Scientist/Engineer should be prepared not only to lead research projects, but also to present them in a compelling way to decision-makers so that they create real impact. We would be particularly excited to hear from people who have the following skills and experience:
Required Skills and Experience
- Machine learning research experience in industry, open-source collectives, or academia, in a field related to AI, AI security, or computer security
- Motivation to conduct technical research with an emphasis on direct policy impact rather than on exploring novel ideas
- Comprehensive understanding of large language models (e.g. GPT-4), including a broad understanding of the literature. Hands-on experience with pretraining, finetuning, and building evaluations for large language models is a plus.
- Broad knowledge of technical safety methods (T-shaped: some deep knowledge, lots of shallow knowledge)
- Strong research experience in Computer Science, AI, or a related field (e.g. PhD), including in experimental design and research problem selection
- Excellent writing and verbal communication skills across technical and non-technical audiences
- A willingness to bring your own voice and experience, to support your colleagues, to do whatever is necessary for the team’s success, and to find new ways of getting things done within government
- A sense of mission, urgency, and responsibility for success, with demonstrated problem-solving ability and a readiness to acquire any missing knowledge needed to get the job done
- A collaborative approach to work, and experience of multi-disciplinary teamwork
- Interest in the misuse of AI systems
Desired Skills and Experience
- Research or technical experience with multimodal AI – in particular, audio and video
- A background in research relating to the (criminal) misuse of AI systems
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is set out below; each salary comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 - £39,280
- Level 4 - Total Package £85,000 - £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 - £52,505
- Level 5 - Total Package £105,000 - £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 - £59,195
- Level 6 - Total Package £125,000 - £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 - £66,230
- Level 7 - Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230
This role sits outside the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; the civil servants concerned are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Working for the Civil Service
Diversity and Inclusion