OpenAI banned the suspect in one of Canada’s worst mass shootings for violating ChatGPT’s usage policies in June of last year, but did not report her to the authorities at the time.
According to the artificial intelligence company, its abuse-detection systems, which monitor for misuse including the potential promotion of violence, flagged the account of the accused shooter, Jesse Van Rootselaar, approximately eight months ago.
Canadian authorities say the 18-year-old killed eight people and injured roughly twenty-five others before killing herself earlier this month in Tumbler Ridge, an isolated hamlet in western Canada.
The company said it banned the account when it was flagged.
The Wall Street Journal first reported OpenAI’s identification of Van Rootselaar. Citing anonymous sources, the paper said the alleged shooter “described scenarios involving gun violence over the course of several days,” prompting an internal discussion among about a dozen employees, some of whom urged leaders to notify the police.
OpenAI said it considered reporting the account to law enforcement at the time but concluded that it did not meet the threshold for referral, as the company could not identify any credible or imminent planning. It contacted Canadian police after the shooting.
“Everyone impacted by the Tumbler Ridge tragedy is in our thoughts,” an OpenAI representative wrote in an email. “We will continue to assist the Royal Canadian Mounted Police in their investigation after proactively contacting them with information on the individual and their usage of ChatGPT.”
The company said it trains ChatGPT to discourage imminent real-world harm.