Research Assistant Residency in Human Influence
About the AI Security Institute
The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI and to develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, to carry out cyber-attacks, and to enable crimes such as fraud, as well as the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Role Description
The Human Influence team at the AI Security Institute is looking for an exceptionally motivated and talented Research Assistant for a 6-month residency.
Human Influence
The Human Influence team studies when, why, and how frontier AI systems influence human attitudes and behaviour. The team’s mandate is to build a rigorous, world-class evidence base for the safe and responsible development of frontier AI by identifying risks, measuring impacts, and informing mitigation strategies related to Human Influence. This includes research on (for example): persuasion, manipulation, deception, advice-giving, theory of mind, anthropomorphism, sycophancy, and socioaffective human-AI relationships.
Our team includes top technical talent from academia and frontier AI companies. Our projects combine methods from computational social science, AI safety and security, cognitive science, behavioural science, computer science, machine learning, and data science. Many of our projects involve conducting careful and rigorous human-AI interaction experiments and randomised controlled trials.
As an example of our work, we recently completed the largest-ever study on the persuasive capabilities of conversational AI.
Person Specification
Successful candidates will work with our research scientists to design and run studies that address these questions. The role is particularly suitable for candidates interested in pursuing a research career in academia or industry (e.g., recently graduated MSc students or early-stage PhD students). We encourage applications from candidates who are excited about this opportunity but who may not meet all of the stated criteria.
We are especially excited about candidates with experience in one or more of these areas:
- Computational social science
- Machine learning / AI
- Data science, especially natural language processing
- Human-computer interaction
- Psychology
- Cognitive science
Required Skills and Experience
We select candidates based on their skills and experience in the following areas:
- Knowledge about frontier models and how they are trained/evaluated
- Experience planning and conducting human experiments
- Strong coding skills (in Python and/or R)
- Experience working on model or system evaluations and other AI safety projects
- Strong knowledge of advanced statistical modelling methods (e.g., multilevel/hierarchical/mixed regression models)
- Strong verbal and written communication, experience working on a collaborative research team, and interpersonal skills
- Demonstrable interest in the societal impacts of AI
Desired Skills and Experience
- Published work on the societal impacts of AI, the evaluation of AI systems, or a related field
- Experience evaluating or training multimodal AI models
- Experience fine-tuning language models
- Experience with reinforcement learning (especially reinforcement learning from human feedback or reward modelling)
- A completed Bachelor's degree and enrolment in an MSc/PhD programme (or equivalent years of industry experience) in AI safety, computer science, data science, social or political science, economics, cognitive science, criminology, security studies, or another relevant field.
- Front-end software engineering skills to build user interfaces for studies with human participants.
Salary & Benefits
This role has a total package of £65,000, comprising a base salary of £37,150 plus an additional technical talent allowance of £27,850.
This role sits outside of the DDaT pay framework, given that the scope of this role requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
This post is open only to applicants who meet the Civil Service nationality requirements and are willing to pursue CTC/SC security clearance.
See Civil Service recruitment: nationality rules on GOV.UK.
Applications will close on 8th September.
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; civil servants dismissed in these circumstances are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Working for the Civil Service
Diversity and Inclusion