B.C. Says OpenAI Did Not Raise Concerns Day After Tumbler Ridge School Shooting

Weekly Voice editorial staff

The British Columbia government says OpenAI did not raise any concerns about the Tumbler Ridge school shooter’s online interactions during a meeting with provincial officials held the day after the deadly attack.

According to the province, the artificial intelligence company had a previously scheduled meeting with government representatives on Feb. 11, one day after the shooting at Tumbler Ridge Secondary School. Officials say there was no mention during that meeting of troubling exchanges between the suspect and the company’s ChatGPT platform.


On Feb. 10, Jesse Van Rootselaar shot and killed eight people, including six at the secondary school, before taking her own life. The tragedy has shaken the small northeastern British Columbia community and prompted ongoing investigations by the Royal Canadian Mounted Police.

A report from The Wall Street Journal alleges that OpenAI employees had previously discussed whether authorities should be alerted about concerning interactions the suspect had with ChatGPT months before the attack. The report says posts involving gun violence scenarios were flagged by OpenAI’s automated review system last June.

Premier David Eby said in a statement that any suggestion OpenAI possessed relevant intelligence before the shooting is profoundly disturbing for victims’ families and for British Columbians more broadly. He added that law enforcement is seeking orders to preserve potential evidence held by digital services companies, including social media platforms and artificial intelligence firms.


The RCMP has stated that OpenAI contacted police after the shooting occurred. Investigators say digital and physical evidence is being collected, prioritized, and methodically processed as part of the ongoing probe.

The case raises broader questions about how technology companies monitor and respond to harmful content generated or discussed on AI platforms, particularly when automated systems flag potentially dangerous material. Authorities have not indicated whether any earlier intervention could have altered the outcome, and the investigation remains active.
