AI Safety Catalyst Call

The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is pleased to launch a call for Catalyst Project proposals on sociotechnical considerations in artificial intelligence (AI) safety. We invite applications led by researchers in the social sciences and humanities (SSH) whose work addresses the pressing social, ethical, and governance considerations of AI safety.
The CAISI Research Program at CIFAR is independently leading Canadian, multidisciplinary research to tackle the safety challenges posed by advanced AI systems. As a core component of the government's broader Canadian AI Safety Institute, the program leverages Canada's robust AI research ecosystem to advance critical knowledge.
Catalyst Projects are novel and exceptionally creative research projects with the potential for broad impact in the field of AI safety. In this call, funding will be up to $70,000 per year for up to two years. By advancing research in this important and urgent area, we aim to build on Canada’s existing strengths in sociotechnical AI safety research, bring together researchers from a range of disciplines, and build a community of SSH researchers working on AI safety across Canada.
We understand AI safety to be a broad-ranging field that includes both technical and sociotechnical dimensions. Our emphasis is on the risk mitigation of advanced AI systems – leading-edge frontier models – and the social, political, ethical, legal and economic implications of such systems outlined in the final International AI Safety Report 2025 (released January 2025). We are particularly interested in projects that address the priority research areas for CAISI, which include synthetic content; AI alignment; advancing knowledge on the design, development, and deployment of safer AI systems; studying the properties of complex AI systems and their real-world impacts; and improving risk governance (assessment, assurance, oversight).
As AI systems grow in complexity, capability, and influence, ensuring their alignment with human values and norms, social institutions, and long-term societal goals becomes critical. While technical research plays a central role in developing safe and robust AI systems, SSH disciplines – particularly the qualitative, interpretive, and humanistic disciplines – offer essential insights into human behaviour, social systems, moral reasoning, institutional design, historical precedents, and public accountability, all of which are vital for designing and governing safe AI.
The inputs and outputs of advanced AI systems are increasingly moving into realms of complex cultural data, requiring contextual, interpretive judgment – the precise kinds of interpretive methodologies that humanists and social scientists are best poised to apply. Mitigating the risks of these systems also requires innovative approaches that acknowledge and address the fact that the technology is already embedded within and interacting with society.
SSH scholars have an important role to play in building better, safer AI systems – including AI architectures, benchmarks, evaluation frameworks, and training strategies – that take this cultural and social complexity into account and that have the potential to advance AI’s ability to solve challenges, enhance human potential, and bring about positive changes to society and humanity. SSH scholars also have a role to play in critically engaging the concepts, histories, and practices of safety.
Full details of the call for proposals can be found on the CIFAR website.
The application will launch on the portal on October 3, 2025 and close on November 14, 2025 (11:59 PM, Anywhere on Earth). Please be sure to click SUBMIT before the deadline.
We expect to release decisions in January 2026.
Eligibility
- Projects may be submitted by one or two Principal Investigators (PIs)
- PIs must have a faculty affiliation at a Canadian university
- The primary PI must have expertise in SSH, including but not limited to: sociology, political science, economics, history, philosophy/ethics, legal studies, science & technology studies, or media studies
Please contact ai@cifar.ca if you have any questions or require additional information.