Police detained a 13-year-old student after he asked an AI platform how to “kill” his friend in class.

According to Gizmodo, the incident took place in DeLand, Florida (USA), where an unnamed student at Southwestern Middle School reportedly asked OpenAI’s ChatGPT chatbot how to “kill my friend in the middle of class.” The query immediately triggered an alert in a monitoring system installed on school-issued computers and managed by Gaggle, a company that provides safety services to school districts nationwide.

Police quickly questioned the teenager, who told officers he had only been teasing a friend who annoyed him, according to NBC affiliate WFLA.

The Volusia County Sheriff’s Office commented: “Another ‘joke’ that created an emergency on campus. Parents, please talk to your kids so they don’t make the same mistake.”

In a blog post, Gaggle describes how its web monitoring filters for specific keywords (“kill” presumably among them) to provide “clear visibility of browser usage, including conversations using AI tools like Google Gemini, ChatGPT, and other platforms.”

The company says the system is designed to flag concerning behavior related to self-harm, violence, bullying, and more, providing context through screenshots.

On its website, the company addresses student privacy as follows: “Most educators and lawyers will tell you that when your child uses technology provided by the school, there should be no expectation of privacy. In fact, your child’s school is required under federal law (the Children’s Internet Protection Act) to protect children from accessing obscene or harmful content online.”