OpenAI announced on Friday that it would be overhauling its security protocols to improve collaboration with Canadian authorities so that tragedies such as the one in Tumbler Ridge “can be avoided in the future.”
“At the request of ministers, we will establish direct points of contact with Canadian authorities to ensure that we quickly refer cases to them when we identify potential violence,” said Ann M. O’Leary, vice president of global policy at OpenAI, the company behind ChatGPT.
OpenAI has also committed to continuing to strengthen its referral process, drawing on the expertise of mental health and behavioral specialists.
The support resources that ChatGPT offers to people who appear to be in distress will also be better tailored to the community in which they live, O’Leary said. According to O’Leary, these commitments are only the “first steps” in improving the safety of its AI. “In the coming months, OpenAI will engage with Ottawa and the provinces, our industry partners, and local stakeholders to ensure that we are collectively meeting the needs of Canadians as we continue to improve our safety models and policies.”
The AI Minister responds
Evan Solomon, Minister for Artificial Intelligence and Innovation, said on Friday that he wants to shed light on “how digital platforms respond when credible warning signs of violence emerge.” He is also scheduled to meet with OpenAI CEO Sam Altman next week. On Tuesday evening, ministers in Carney’s government blamed OpenAI for the inadequate security measures of ChatGPT, which were revealed after the Tumbler Ridge massacre.
They had just come out of a meeting with representatives from OpenAI.
“We discussed how imminent and credible risks are identified, how cases move from automated detection to human review, and how reports are handled, particularly when young people may be involved. We did not discuss the details of the case, as the police investigation is still ongoing,” Solomon said in a statement released on X following the meeting.
Violent scenarios shared on ChatGPT
This meeting followed revelations in the American daily newspaper The Wall Street Journal that Jesse Van Rootselaar, the perpetrator of the Tumbler Ridge shooting in British Columbia, had confided violent intentions to the chatbot last June, months before carrying out the act.
OpenAI only gave this information to the police after the fact. Jesse Van Rootselaar opened fire on February 10 at a high school in the small British Columbia community of Tumbler Ridge, killing eight people.
After the shooter shared violent scenarios on ChatGPT over several days, OpenAI decided to suspend her account without notifying Canadian authorities of the situation.
In a letter sent to the Canadian government on Thursday, the American AI giant justified this decision by stating that it had not “identified a credible or imminent plan that met [its] threshold for referring the matter to law enforcement.” OpenAI reported on Friday that the shooter had also circumvented its ban on using ChatGPT by creating a second account. The company claims that it only discovered this after the RCMP announced Jesse Van Rootselaar’s name.
On Tuesday, the Canadian government indicated that it was open to tighter regulation of AI if companies did not take steps to improve their security protocols.