Who we are
We’re focused on helping people create a better future for humanity. We do this by designing and running courses on some of the world’s most pressing problems, and providing engaging and action-guiding experiences for individuals and organisations that want to make a positive difference.
BlueDot Impact was founded in August 2022 in Cambridge, UK, and grew out of a non-profit supporting students at the University of Cambridge to pursue high-impact careers. Our courses quickly gained traction, as many of the challenges facing students at the university were also faced by students and professionals worldwide. To learn more about our company’s story, check out this podcast interview with Dewi, one of our founding team members.
We currently run the world’s largest courses on AI Safety, with a graduate community of over 2,000 individuals working across all the major AI companies, top universities and governments. The course is widely regarded as the go-to place to learn about AI Alignment, and the AI Safety Fundamentals website receives over 10,000 unique visitors each month. We are keen to use these capabilities to benefit the biosecurity field as we have the AI Safety field.
Over the last three months, we have run a successful pilot of the Biosecurity Fundamentals: Pandemics Course, with over 100 medical, public health and synthetic biology experts from across the world. We are excited for you to take this success to the next level, empowering thousands of people to build a pandemic-proof world in an age of rapidly advancing AI and synthetic biology capabilities.
During the first 6 months of 2024, we will:
- Ship a new iteration of our courses every month to generate faster organisational learning and support more students (up from every 3-4 months in 2023);
- Pilot new initiatives to increase the proportion of students taking impactful actions after graduating from our courses; and
- Build on our existing relationships with teams in the UK Government to support their AI Safety work, including the UK Office for AI and the UK’s AI Safety Institute.