1.2 Million Student AI Conversations in 80 Days: What We Learned From 1,300 School Districts in 39 States
How students across America are using AI tools, and what Securly's AI Transparency Solution reveals about those interactions
Presented by Securly
Executive Summary
Whether or not districts explicitly permit it, a growing body of evidence suggests that K-12 students are already using AI tools on school devices. Despite widespread usage and numerous self-reported surveys, little is known about exactly how students are using AI.
New data from Securly collected from December 1, 2025, to February 20, 2026, provides unprecedented visibility into how students are engaging with AI. Although these data suggest that a growing number of students are using AI for legitimate educational purposes, they also raise serious safety and wellness concerns.
- Securly provided visibility into over 1.2M AI conversations on school-issued devices.
- ChatGPT (OpenAI) was the most popular LLM among 1,312 districts across 39 states included in the analysis.
- About 80% of students' AI conversations were deemed “educationally appropriate” based on policies set by district administrators.
- The remaining 20% of conversations (241,519) triggered at least one content deflection based on district policies, meaning the AI blocked or flagged the request as inappropriate.
- About 2% of AI prompts showed signs of self-harm, violence, or bullying.
Methodology & Terminology
In November 2025, Securly launched its AI Transparency Solution to provide K-12 school districts with insight into how students are engaging with AI on school-issued devices. The following analysis is grounded in data from 1,312 school districts across 39 states that have enabled Securly's AI Transparency Dashboard, which monitors student use of AI chatbots and flags potential safety concerns such as self-harm, bullying, or violence based on the content of student prompts and conversations.
This analysis was conducted within strict data protection agreements. The data processed through our system contains no PII; all data are de-identified before aggregation. This analysis is a byproduct of the data we collected and categorized in accordance with our districts' policies.
safetyOS™: Securly's overarching safety platform that includes AI monitoring, web filtering, and student wellness alerting capabilities.
AI Transparency Dashboard: Securly's tool that gives district administrators using Securly Filter visibility into how students are using AI chatbots on school devices, including which platform they are using and what type of content is being discussed.
Securly AI Chat: Securly's educationally appropriate chatbot that districts may choose to use as the interface on top of a commercial LLM, giving the district more control over prompt responses.
Securly Aware: Securly's technology that analyzes students' online activities aligned to district policies to identify signs of cyberbullying, self-harm, and potential violence.
Content Deflection: When a student's AI prompt is blocked or redirected by the system because it falls outside the district's approved use policies.
Guardrails (Use Policies): The content policies and boundaries a district sets to define what topics of AI conversation are allowed on school-issued devices. These are customizable by a school district in Securly AI Chat.
Flagged: When a student's AI conversation is identified by Securly Aware's AI monitoring as potentially involving a safety or wellness concern (such as self-harm, bullying, or violence) that may need intervention.
Key Metrics
Scale of AI adoption across participating school districts
Most popular AI tools
Share of total student AI conversations by platform
*Securly AI Chat is Securly's own educationally appropriate AI interface, which can be configured to use either OpenAI's (GPT-5.1) or Google's (Gemini 2.5 Pro) models.
**Copilot, MagicSchool, BriskTeaching, and NotebookLM together represent 9%.
Content deflection frequency
How district content policies shaped student AI interactions
Securly's AI Transparency Solution allows schools and districts to place guardrails on AI chat conversations.
About 80% of student AI conversations were appropriate and fell within established guardrails. The remaining 20% (241,519 conversations) triggered at least one content deflection based on district policies, meaning the chat's topic violated the school's policy, and the AI attempted to steer the conversation back to a safe topic.
Breakdown of deflected conversations
Example of a prompt that was deflected as schoolwork completion:
"Select the correct answer. Which of the following is not a principle of Puritanism?..."
Safety & wellness alerts
Real-time monitoring of student AI interactions for potential safety risks
Securly Aware is Securly's AI-powered student wellness monitoring solution that scans students' online activities to detect, in real time, potential risks like cyberbullying, self-harm, and violence. Among schools and districts using Securly Aware as a component of Securly's AI Transparency Solution, a total of 24,174 student AI prompts were flagged during this 80-day period.
Conversations flagged by schools using Securly Aware for safety or wellness concerns totaled about 2% of AI conversations across all platforms.