![](https://images.squarespace-cdn.com/content/v1/630facf1f97fe818fd398a15/807851af-0682-4c66-a651-4b34883d7249/IMG_0623.png)
AI Safety Fast Track
Transformative AI is coming. What will you do?
The capabilities of today’s AI systems are rapidly advancing, matching or surpassing human experts on well-defined tasks in coding (IOI Gold), mathematics (IMO Silver), and a broad range of scientific disciplines (OpenAI o3 on GPQA Diamond). If progress continues at its current pace, AI could become the most transformative technology in history, reshaping the world in ways we can scarcely imagine, potentially within just years.
To navigate this transition safely, the world needs talented, driven individuals to contribute—whether through fundamental research in AI labs, expanding theoretical understanding in academia, or shaping policy in think tanks and government.
CAISH is excited to introduce the AI Safety Fast-Track, an intensive 6-week program that equips participants with the knowledge and tools to understand and contribute to AI safety. Whether your background is technical or policy-focused, you'll learn from leading experts in the field through workshops, paper discussions, and hands-on projects, bringing your knowledge of AI safety from 0 to 1 (binary pun intended).
The program is designed for participants from a wide range of skill levels and areas of expertise. Background in AI is not necessary to participate, but it is an advantage for those applying to the technical track.
Lecture and workshop content will explore core questions in AI Safety, such as:
When might we develop AI that outstrips human abilities in all relevant domains?
How can we prepare for AI-enabled R&D?
Will sanctions on the semiconductor industry exacerbate or mitigate AI race dynamics between major world powers?
How can we reverse engineer neural networks?
How could formal verification methods aid safer development of AI?
And much more.
The program is designed for students and professionals of various backgrounds, with the option for participants to specialise in either the technical track or the governance & policy track.
Why Join?
Participants will learn about research and developments at the cutting edge of AI safety, led by experienced facilitators and professionals from industry and academia. Some benefits of the program:
Flexible Learning Paths: Participants attend the workshops and sessions that are most relevant to their interests and skills.
Expert-Led Sessions: Learn from professionals at leading AI safety organizations and universities.
Community & Network: Join CAISH’s network of committed individuals working toward safer AI, with alumni working at places such as Anthropic, Google DeepMind, the UK AI Safety Institute, and other AI safety organisations.
Apply for early acceptance here by February 14th.
Please note: applications will be considered on a rolling basis until March 7th, or until the programme is full.
Program Format
The program consists of two stages:
Stage 1: Context Loading
1-3 facilitator-led workshops to introduce core AI Safety concepts and motivation.
Stage 2: Inference Time
A variety of workshops and lectures on various aspects of AI Safety, led by a mix of professionals and academics. The technical prerequisites for these workshops vary widely, as some are programming-based (Python) while others are policy-oriented. Participants choose which workshops to attend according to their preferred focus. In order to complete the program, participants must attend at least 4.
Context Loading, in AI, refers to how much relevant information or ‘context’ an AI model can consider at once when processing input or generating responses.
Inference refers to the process where a trained model makes predictions or decisions based on new input data, using patterns and relationships it learned during training. It's like how humans apply learned knowledge to new situations.
~ Claude 3.5 Sonnet
Program Timeline
February 14, 23:59: Fast-Track Early Applications close.
February 2025: Introductory Workshops (“Context Loading”)
March 7th, 2025: Fast-Track General Applications close.
March 2025: Fast-Track Program Workshops (“Context Loading” and “Inference Time”)
Note: Depending on participants’ performance during the introductory workshops, there is a mid-programme filtering stage that takes place after those workshops. Before attending the in-depth workshops, participants must have attended the introductory workshops and satisfactorily completed the associated assignments. Hence, not all “Context Loading” participants will be able to continue to the “Inference Time” stage of the program.
Previous workshop leads & CAISH lecturers have been from:
FAQs
-
No.
-
Workshops are all hosted in-person in Cambridge, UK.
We expect almost all applicants to already be based in either Cambridge or London. For London-based participants, we may be able to cover travel expenses from London to Cambridge if it would otherwise prohibit you from joining the programme.
-
We will accept applications from students, working professionals, and academics who want to expand their knowledge of AI Safety.
Applicants may come from technical backgrounds (e.g. STEM students, professional SWEs, ML engineers, etc.) as well as policy/governance backgrounds (e.g. civil servants, international studies students, public policy students, etc.)
If you have a unique background but think the programme will benefit you, please err on the side of applying!
-
No.
For technical track applicants, some experience with machine learning or AI is a plus, but not required.
-
Depending on the time of day, workshops will include either dinner or snacks for all attending participants.
-
We expect participants to commit roughly 3-5 hours of time per week to the Fast-Track program, split between attending workshops, pre-readings & assignments, and social events.
-
Please contact harrison@cambridgeaisafety.org and gabor@cambridgeaisafety.org.
![](https://images.squarespace-cdn.com/content/v1/630facf1f97fe818fd398a15/00eb1eff-794b-444a-b39b-2530ca893dd6/IMG_0615.png)