Cambridge AI Safety Hub

Working together to address risks from advanced artificial intelligence.

We are a network of students and professionals in Cambridge (UK) working on AI safety - the problem of ensuring that AI systems don’t cause catastrophic failures as they become increasingly powerful and complex.

We do this by conducting technical and policy research, running educational and research programs that teach people the skills to contribute themselves, and creating a vibrant community of people with shared interests.

Community.

We’re building a community of people in Cambridge brought together by the goal of making AI development go well.

Education.

We run courses to bring people up to speed with the state of the art in AI safety. Some have a more conceptual focus; others involve more hands-on programming. We’re continuing to expand the range of courses we offer.

Research.

Our members both conduct research themselves and mentor others. Our educational courses serve as a pipeline into our research efforts.

Getting started with CAISH.

If you’re new to AI safety, the best way to start learning is to apply for our AI Safety Fundamentals Fellowship, a 6-week reading group and workshop series designed to bring you up to speed with the state of the art in AI safety research. The fellowship offers a Technical Track and an AI Governance Track, depending on your background and interests.

01. Apply for our Intro Fellowship.

For those who have completed the Intro Fellowship and are looking to go further, our ML Bootcamp is designed to equip you with the skills to start doing safety research during your degree. Note: there will be no ML Bootcamp in Michaelmas 2024. Details for future bootcamps will be released at a later date.

02. Apply for our ML Bootcamp.

03. Join an event.

We host socials at our office during terms. Come and chat to our members to learn about AI safety and how you can get involved!

04. Join the leadership team.

If you want to be part of our leadership and help shape CAISH, contact hello@cambridgeaisafety.org or gabor@cambridgeaisafety.org and we will be in touch!

Apply now to get involved…

CAISH Fast-Track
Early Applications

Due February 14th, 11:59pm

Transformative AI is coming.
What will you do?

The capabilities of today’s AI systems are advancing rapidly, matching or surpassing human experts on well-defined tasks in coding (IOI Gold), mathematics (IMO Silver), and a broad range of scientific disciplines (OpenAI o3 on GPQA Diamond). If progress continues at its current pace, AI could become the most transformative technology in history, reshaping the world in ways we can scarcely imagine, potentially within just years.

To navigate this transition safely, the world needs talented, driven individuals to contribute, whether by conducting fundamental research in AI labs, expanding theoretical understanding in academia, or shaping policy in think tanks and government.

CAISH is excited to introduce the AI Safety Fast-Track, an intensive 6-week program that equips participants with the knowledge and tools to understand and contribute to AI safety. Whether your background is technical or policy-focused, you'll learn from leading experts in the field through workshops, paper discussions, and hands-on projects, bringing your knowledge of AI safety from 0 to 1 (binary pun intended).

Stay up to date with our news by signing up for our mailing list: