
McGill team awarded CIFAR AI Safety Catalyst Grant to advance developer oversight in AI-assisted coding

Published: 19 January 2026
The McGill team aims to develop guidelines, tools, and policy insights that help software engineers work safely and effectively with AI-assisted coding systems.

A McGill research team is tackling one of AI’s fastest-moving challenges: how software developers can steer and safeguard code as AI systems become capable of writing large portions of software on their own.

The team is one of ten across Canada awarded funding through the CIFAR AI Safety Catalyst Grants, part of the new Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Each project receives $100,000 for one year, with support for up to two postdoctoral researchers.

The McGill project, “Maintaining Meaningful Control: Navigating Agency and Oversight in AI-Assisted Coding,” is led by Professor Cheung, Associate Professor in the School of Computer Science, Canada CIFAR AI Chair, and Associate Scientific Co-Director at Mila; Professor Guo, Associate Professor in the School of Computer Science, co-director of the McGill Software Technology Lab, and Associate Member of Mila; and postdoctoral researcher Shalaleh Rismani.

“Developers struggle most with trust and verification”

AI-assisted coding systems are rapidly transforming software engineering. According to the most recent benchmarks, today’s top models can solve more than 60% of well-scoped, real-world software-engineering tasks such as bug fixes. AI companies are also actively developing more agentic AI systems designed to execute multi-step software development workflows with little to no human intervention.

As organizations adopt these tools, developers face increasing pressure to integrate them into their workflows. While the technology promises to improve efficiency and quality, it also introduces new risks.

“Developers struggle most with trust and verification of AI-generated code,” said Professor Guo. “The generated code may look correct, but developers aren’t confident about its reliability, correctness, or hidden security issues.” Low-quality code can also make review processes time-consuming, and many developers are unsure whether these tools ultimately boost productivity.
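To make the concern concrete, here is a minimal, invented sketch of the kind of issue Guo describes: AI-generated code that reads as correct but hides a classic security flaw. The function names and database schema are hypothetical, not drawn from the project.

# Hypothetical illustration (all names and the schema are invented):
# AI-generated code that "looks correct" but hides a security flaw.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Reads naturally and passes a casual review, yet it is vulnerable to
    # SQL injection: a username like "x' OR '1'='1" returns every row.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix a careful reviewer should insist on: let the driver bind
    # the value through a parameterized placeholder.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()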

Despite the growing adoption of AI systems in software engineering, and in code generation specifically, clear guidelines on what developers should oversee, how they should do so, and when they should intervene are still lacking.

“The human-computer interaction [HCI] community has been investigating how these emerging technologies influence software engineering practices, examining issues such as trust, tool adoption, and workflow adaptation,” said Professor Guo. “However, research on what effective oversight looks like in AI-supported code generation remains underdeveloped.”


Research shaping safer AI-assisted coding

To close these gaps, the McGill team aims to develop guidelines, tools, and policy recommendations that help software engineers work safely and effectively with AI-assisted coding systems.

The project will roll out over multiple phases, starting with identifying key patterns in how developers override, refine, or validate AI-generated code. Through interviews with developers in small and medium-sized companies, the researchers will map key decision points: when suggestions are accepted or rejected, how generated code is reviewed, and what prompts hands-on intervention as systems become more autonomous.
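As a purely hypothetical illustration of the decision points such a study might record, the sketch below captures whether a suggestion was accepted, rejected, or edited, and what triggered a hands-on intervention. The field names are invented for the example; this is not the team’s research instrument.

# Hypothetical sketch of oversight decision points as structured data.
# Requires Python 3.10+ for the "str | None" annotation.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Decision(Enum):
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    EDITED_THEN_ACCEPTED = "edited_then_accepted"

@dataclass
class OversightEvent:
    suggestion_id: str
    decision: Decision
    review_seconds: float                    # time spent inspecting the suggestion
    intervention_reason: str | None = None   # e.g. "failing test", "unclear intent"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a developer rejects a suggestion after a test failure.
event = OversightEvent(
    suggestion_id="s-042",
    decision=Decision.REJECTED,
    review_seconds=95.0,
    intervention_reason="failing test",
)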

Building on these findings, the team will co-design an AI-assisted coding interface to support effective oversight when developers use AI to carry out substantial software engineering tasks. The interface will allow developers to set constraints and receive clear explanations of the AI’s reasoning, uncertainties, and alternatives. The system will adapt dynamically to developer input, creating a shared sense of intent between human and machine.
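Purely as illustration of those concepts, the sketch below imagines the kind of data such an interface might surface for each suggestion: developer-set constraints, the model’s stated reasoning, an uncertainty estimate, and alternatives. None of this reflects the team’s actual design.

# Illustrative only (not the team's design): data an oversight-oriented
# coding interface might expose for each AI suggestion.

from dataclasses import dataclass, field

@dataclass
class Constraints:
    # Limits a developer sets up front, before the assistant acts.
    forbid_new_dependencies: bool = True
    max_changed_files: int = 3
    require_tests: bool = True

@dataclass
class AnnotatedSuggestion:
    diff: str                       # the proposed code change
    reasoning: str                  # the model's explanation of the change
    uncertainty: float              # 0.0 (confident) .. 1.0 (very unsure)
    alternatives: list[str] = field(default_factory=list)

def needs_human_review(s: AnnotatedSuggestion, threshold: float = 0.4) -> bool:
    # Route low-confidence suggestions to the developer rather than
    # applying them automatically.
    return s.uncertainty >= threshold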

The team will test the interface with developers to evaluate its impact on workflow, confidence, and code quality. They will also experiment with features such as explainability mechanisms, critique prompts, and uncertainty indicators. Depending on the results, follow-up research could turn the findings into a fully designed interface.
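One of those features, critique prompts, asks the assistant to review its own output before the developer sees it. The template below is invented for illustration and is not the team’s wording.

# Hypothetical wording of a "critique prompt": after generating a change,
# the assistant is asked to critique its own output.

CRITIQUE_PROMPT = """You previously proposed the following change:

{diff}

List concrete problems a careful reviewer might find: correctness bugs,
security issues, missing tests, and unstated assumptions. If you are
uncertain about any part, say so explicitly."""

def build_critique_request(diff: str) -> str:
    return CRITIQUE_PROMPT.format(diff=diff)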


Creating actionable guidelines

The project’s ultimate goal is to produce practical guidelines for ensuring that AI-generated software remains reliable and under meaningful human control. These recommendations will inform best practices for AI developers, software engineers, and policymakers.

“Effective oversight is highlighted in many regulatory approaches, including the EU AI Act, but what it looks like in practice is still unclear,” said Shalaleh Rismani, PhD. “We think this project can help clarify what effective oversight looks like in real software engineering settings and inform both industry practices and policy discussions in Canada and internationally.”

The project’s long-term impact may also extend to software engineering education, helping students learn the best practices and ethical considerations of working with AI-based coding tools.

A multidisciplinary team tackling a national challenge

McGill’s position as a national leader in AI research makes it a natural home for this work. “McGill’s strong research communities in AI, software engineering, HCI, and AI ethics, and partnerships with institutes such as Mila and the Computational and Data Systems Institute (CDSI), provide an ideal environment for this research to take place,” said Professor Cheung. “These interdisciplinary connections allow us to approach this project from different perspectives and support practical impact through collaborations with industry partners.”

The team emphasizes that CIFAR’s funding has also enabled a high level of collaboration. “The Catalyst Grant allowed us to bring together three researchers with very different but complementary backgrounds,” said Professor Cheung. “For a project like this, you really need methods and perspectives from multiple disciplines, and the grant made it possible for us to actually build that kind of team.”
