AI Safety Specialist (Technical)

Shape the future of AI safety research by designing the world’s most popular courses on AI safety.

Apply now

Who we are

We’re focused on helping people create a better future for humanity. We do this by designing and running courses on some of the world’s most pressing problems, and providing engaging and action-guiding experiences for individuals and organisations that want to make a positive difference.

BlueDot Impact was founded in August 2022 in Cambridge, UK, and grew out of a non-profit supporting students at the University of Cambridge to pursue high-impact careers. Our courses quickly gained traction, as many of the challenges facing students at the university were also faced by students and professionals worldwide. To learn more about our company’s story, check out this podcast interview with Dewi, one of our founding team members.

Thus far in 2023, we’ve supported over 1,000 people to learn about and contribute to AI safety and pandemic preparedness. During the first 6 months of 2024, we will:

  • Ship a new iteration of our courses each month to generate faster organisational learning and support more students (up from every 3-4 months in 2023);
  • Pilot new initiatives to increase the proportion of students taking impactful actions after graduating from our courses; and
  • Build on our existing relationships with teams in the UK Government to support their AI Safety work, including the UK Office for AI and the UK’s AI Safety Institute.

Note: we recently updated this job title and description. If you’ve already applied for the “Course Designer (Technical)” role, you do not need to apply again to this role.

What you'll do

We’re looking for an AI safety specialist to own and redesign the world’s most popular course on AI safety. You’ll work with a small and ambitious team who are focused on building the world’s best learning experiences and supporting our students to have a significant positive impact with their careers.  

In the first 6 months, you will:

  • Work with top AI researchers and conduct expert interviews to determine the goals and narrative of the AI Alignment course;
  • Read cutting-edge research in AI safety, and prioritise which papers and arguments students should engage with;
  • Design a world-class learning experience for students on the AI Alignment course, applying insights from the research into effective learning; and
  • Collaborate with our community of facilitators to continuously improve the course based on user experience and feedback.

After that, you will: 

  • Research and determine the highest priorities for our graduate community’s further learning, such as deep dives into specific alignment agendas;
  • Build partnerships with top experts, government departments and AI companies to design more advanced courses;
  • Support the next generation of researchers on our courses to land impactful roles after they graduate; and
  • Help to scale our courses by identifying niche target audiences and contributing to marketing campaigns.

Richard Ngo initially designed the course in 2021, and we now have a graduate community of over 2,000 individuals working at major AI companies, top universities and governments. The course is widely regarded as the go-to resource for learning about AI Alignment, and the AI Safety Fundamentals website receives over 10,000 unique visitors each month. Over the next year, we’ll grow this audience by scaling up our digital marketing campaigns, giving local groups access to our course platform, and launching a new programme to scale up facilitation capacity. You’ll be responsible for ensuring that this growing audience has an excellent and informative experience learning about AI Alignment, one that provides them with the foundational knowledge and motivation required to contribute to the field.

About you

We are looking for someone who is actively engaged with the AI safety field and is motivated to create excellent learning experiences that support the next generation of researchers. 

You might be a particularly good fit for this role if you have:

  • Written about or conducted research on topics related to AI safety, and enjoy communicating with large audiences.
  • A strong understanding of machine learning, or have the technical abilities to learn this quickly.
  • Created educational courses or learning experiences on any topic, especially using “active learning” techniques.
  • Participated in or facilitated discussions in one of the AI Safety Fundamentals courses, and formed opinions on how the course could be improved.
  • Established relationships with individuals across the AI Safety and Alignment ecosystem, and feel excited to deepen those relationships.
  • Been a teacher or teaching assistant at university, and felt motivated to improve the quality of learning of your students.

We encourage speculative applications; we expect many strong candidates will not meet all of the criteria listed here.

We believe that a more diverse team leads to a healthier and happier workplace culture, better decision-making, and greater long-term impact. In this spirit, we especially encourage people from underrepresented groups to apply to this role, and to join us in our mission to help solve the world’s biggest problems.

Location and compensation

We’re based in London, and we accept applications from all countries. We can sponsor UK visas. We have a strong preference for individuals who can move to London, though we will consider remote arrangements for exceptional candidates.

Compensation is based on your experience and relevant industry benchmarks, and will likely fall within the £60–90k range.

Apply for this role

The application process consists of four stages:

  • Stage 1: Initial application (<20 minutes).
  • Stage 2: Work test (2 hours).
  • Stage 3: Interviews and work trial (~1 day).
  • Stage 4: Reference checks.

We’re evaluating candidates on a rolling basis and we encourage you to apply as soon as possible.
