
CIFAR Portal

CAISI Research Program at CIFAR - Call for Solution Networks
Opens Feb 27 2025 09:00 AM (EST)
Deadline Apr 21 2025 11:59 PM (EDT)
Description

The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is pleased to launch a call for Expressions of Interest for Solution Networks in AI Safety. The purpose of this call is to identify up to three multidisciplinary networks of experts who will work together to develop solutions that could have a transformative impact on AI safety.

CIFAR Solution Networks are collaborative opportunities for multidisciplinary teams led by a core group of up to six individuals who together work to develop approaches and applications (solutions) to current sociotechnical challenges and opportunities in AI safety. The solutions could apply to sectors across society, including academia, government, industry, and civil society, and should focus on the development of defined prototypes, practices, and tools (both technical and governance) in AI safety. Solutions should focus on mitigating the risks associated with advanced, leading-edge AI systems.

Proposals that do not focus on mitigating the safety risks of advanced AI systems at scale will not be considered. Proposals whose primary goal is to drive new knowledge or understanding in order to advance a field of study, or that focus solely on technical solutions (e.g., improving machine learning algorithms), will not be considered.

For this call, we have two streams, each with a distinct topical or geographical focus.

  • Stream A: Mitigating the Safety Risks of Synthetic Content 

  • Stream B: AI Safety in the Global South 

Eligibility

Each Solution Network should be made up of a multidisciplinary team and should: 

  • Be led by two Co-Directors and include up to four additional members.

  • Include at least one team member with a faculty or staff affiliation at a National AI Institute (Amii, Mila, or the Vector Institute).

  • Consider equity, diversity, and inclusion in the team composition (including gender representation).

  • Include team members from any sector (academia, government, or industry), selected for their expertise rather than as representatives of an organization. CIFAR will not provide stipends to team members from government or industry.

  • Stream A: Mitigating the Safety Risks of Synthetic Content

    • Include at least one team member with expertise in social sciences, humanities, or legal studies.

    • Include at least one team member with technical expertise in AI.

    • International collaborators may be included; however, the majority of the team should be based in Canada.

  • Stream B: AI Safety in the Global South

    • Have at least one member based in Canada and at least one member based in the Global South (the setting where the solution will be applied).

    • Teams that include members from at least two countries in the Global South are preferred.


The full Call for Expressions of Interest can be found on the CIFAR website.
