AI Safety Forum Australia

Program

7–8 July 2026 • The University of Sydney, Sydney, Australia

The full program will be published closer to the event. In the meantime, see below for the format and topics we'll cover.

Forum Topics

Some Questions We'll Explore

Technical AI Safety

What are the central technical challenges in making AI systems safe, and where is progress being made?

AI Governance

How should AI systems be governed, and what can Australia learn from international approaches?

Australia's Role

What are Australia's strengths in AI safety, and how can they be brought to bear?

Risk Assessment

How do we identify and assess risks from AI systems when the science itself is still evolving?

Evaluations & Testing

How do we measure what AI systems can do, and what counts as a meaningful safety evaluation?

Cross-Sector Dialogue

How can researchers, policymakers, and practitioners actually work together on AI safety?

PROGRAM

Forum Format

Two days of keynotes, workshops, panels, and networking, with plenary and parallel sessions running side by side. Opening and closing sessions bring everyone together.

Presentations & Panels

Keynote presentations, invited talks, and panel discussions in the plenary room across both days

Workshops & Breakouts

Parallel workshop and breakout sessions running alongside the plenary program

Lightning Talks

Short, focused presentations on emerging research and ideas

Networking & Meetings

Structured networking sessions and bookable 1-on-1 meetings with other attendees

7–8 July 2026 • Full program will be published closer to the event

Be Part of the 2026 Program

We're building the program and inviting proposals for presentations, workshops, lightning talks, panels, and networking sessions.

Submit a Proposal

2024 EXAMPLE

2024 Program Schedule

Here's what the inaugural 2024 forum looked like. The 2026 program will follow a similar format.

Day 1: Keynote Day — Thursday 7 November 2024

08:00
Registration and free barista coffee cart
09:00
Opening of the Inaugural Australian AI Safety Forum
Helen Wilson, Deputy Secretary, Science and Technology, Commonwealth Department of Industry, Science, and Resources
09:05
Welcome to the Venue, Sydney Knowledge Hub
Rupal Ismin, Sydney Knowledge Hub
09:10
Welcome to AI Safety Forum
Liam Carroll, Gradient Institute / Timaeus
09:30
State of AI
Tiberio Caetano, Gradient Institute
10:00
State of Technical AI Safety
Daniel Murfet, University of Melbourne
10:30
State of AI Governance
Kimberlee Weatherall, University of Sydney
11:00
Morning tea
11:30
Keynote: Red-Teaming for Generative AI — Silver Bullet or Security Theater?
Hoda Heidari, Carnegie Mellon University
12:15
Keynote: Accelerating AI Safety Talent
Ryan Kidd, MATS Research
13:00
Lunch
14:00
Keynote: Frontier AI Safety Governance — Open Questions
Seth Lazar, Australian National University
14:45
Keynote: ASI Safety via AIXI
Marcus Hutter, Australian National University
15:30
Afternoon tea
16:00
Panel Discussion
17:00
Networking and drinks
18:30
End of Day 1

Day 2: Forum Day — Friday 8 November 2024

08:30
Doors open
09:00
Introduction to Day 2
Liam Carroll, Gradient Institute / Timaeus
09:05
Workshop: The Interim International Scientific Report on the Safety of Advanced AI
Daniel Murfet, University of Melbourne
Explores the best current scientific understanding of the capabilities and risks of increasingly capable general-purpose AI systems, including critical challenges and emerging technical approaches for making progress on their safety.
10:30
Morning tea
11:00
Workshop Track A: International Governance of AI Safety — A Role for Australia?
Johanna Weaver (Tech Policy Design Centre), Chelle Adamson (Dept. of Industry, Science & Resources)
Learn about the latest international processes, laws and proposals, explore their relevance to the Australian context, and discuss how Australia might participate in international governance for AI safety.
Workshop Track B: Unpacking "Safe" and "Responsible" AI
Qinghua Lu (CSIRO), Alexander Saeri (MIT FutureTech)
An overview of recent progress in the science of Responsible AI, with space to discuss cross-pollination between Responsible AI and AI Safety research and practice communities.
Workshop Track C: Perspectives on Generalisation in the Science of AI Safety
Daniel Murfet (University of Melbourne), Marcus Hutter (ANU)
Mathematical frameworks that illuminate AI behaviour, with implications for technical approaches to AI safety.
12:00
Workshop Track A: New Governance Proposals for Frontier AI Safety
Atoosa Kasirzadeh (Carnegie Mellon University), Seth Lazar (ANU)
Covers responsible scaling policies, compute governance, open source development, governing autonomous agents, and pre-release testing. Participants collectively examine the political and social dimensions of frontier AI governance and explore trade-offs in governance strategies for highly capable AI systems.
Workshop Track B: Emerging Practice in Technical AI Safety
Soroush Pour (Harmony Intelligence), Ryan Kidd (MATS Research), Karl Berzins (FAR AI)
Explores concrete initiatives in technical AI safety through AI evaluations development, talent cultivation, and research acceleration, followed by open discussion.
13:00
Lunch
14:00
Workshop: What Could an Australian AI Safety Institute Look Like?
Nitarshan Rajkumar (University of Cambridge), Greg Sadler (Good Ancestors Policy)
The UK, US, Japan and others have established AI Safety Institutes to research and support action on risks from AI. Participants discuss what an Australian AISI could do, how this could advance AI safety in Australia and internationally, and how such an Institute could operate.
15:50
Concluding Remarks
Liam Carroll, Gradient Institute / Timaeus
16:00
End of Day 2

2024 HIGHLIGHTS

Watch 2024 Sessions

Introduction to the Forum - Liam Carroll

State of AI - Tiberio Caetano

Technical AI Safety - Daniel Murfet

State of AI Governance - Kimberlee Weatherall

Red-Teaming for Generative AI - Hoda Heidari

Accelerating AI Safety Talent - Ryan Kidd

Frontier AI Safety Governance - Seth Lazar

ASI Safety via AIXI - Marcus Hutter