Federal officials say they expressed disappointment to OpenAI representatives during a meeting in Ottawa concerning the company’s handling of warning signs linked to the Tumbler Ridge secondary school shooting. Artificial Intelligence Minister Evan Solomon said the discussion was requested following reports that the company did not promptly alert law enforcement about troubling online activity tied to the accused shooter.
According to Solomon, OpenAI officials did not present substantial new safety measures during the meeting but indicated they would return with more detailed proposals. He added that the company confirmed it is co-operating with the RCMP as the investigation continues. Federal ministers stressed that credible warning signs connected to potential violence should be escalated quickly when public safety may be at risk.
The issue gained attention after reports indicated that the suspect’s account had been banned months before the Feb. 10 killings due to disturbing posts that included violent scenarios. OpenAI said the account was suspended in June but that the activity did not meet its internal threshold for notifying law enforcement because it did not appear to involve credible or imminent planning at the time.
Public Safety Minister Gary Anandasangaree and Culture Minister Marc Miller also attended the Ottawa meeting. Solomon said the discussion did not include specific details of the criminal investigation but focused on understanding how the company’s safety systems operate and how decisions about reporting are made. He said all options remain on the table regarding potential oversight of AI chatbots.
British Columbia Premier David Eby has called for a clear and transparent reporting threshold that would require AI companies to notify authorities when serious safety concerns arise. Legal experts note that while new legislation could mandate reporting, crafting such rules would be complex. Any framework would need to balance public safety with privacy protections and avoid overwhelming law enforcement with non-credible reports.
The federal government has confirmed it is working on broader online harms legislation. Officials say future measures could include clearer responsibilities for digital platforms, though the specifics remain under development. For now, ministers say they are seeking systemic information to determine whether changes are necessary to help prevent similar tragedies in the future.