Artificial intelligence (AI) is now deeply embedded in education. Teachers use it to streamline lesson planning and grading, while students lean on it for brainstorming, research, and language support. But new research from Keeper Security warns that AI adoption is far outpacing governance and security, leaving schools vulnerable to disruptive—and sometimes harmful—AI misuse.
Keeper's survey of 1,460 administrators across the U.S. and U.K. found that 86% of schools allow students to use AI tools and 91% permit faculty use—mostly with guidelines in place.
According to the research, students are using AI primarily for research (62%), brainstorming (60%), and language support (49%). Faculty and staff focus on efficiency, with top uses in scheduling (67%) and lesson preparation (60%).
This divide reflects how institutions encourage faculty to drive productivity through AI while restricting students' use to exploratory activities to preserve academic integrity.
AI's integration hasn't come without consequence. Keeper found that 41% of schools have already reported AI-related cyber incidents, including phishing, misinformation campaigns, and harmful deepfake content created by students.
Among the key findings:
30% of institutions said students had produced harmful AI content, such as deepfake impersonations.
11% reported disruptions from AI-driven phishing or misinformation, while another 30% contained incidents quickly.
90% of respondents expressed concern over AI-related threats, with student privacy violations (65%), learning disruption (53%), and deepfake impersonation (52%) topping the list.
This aligns with Keeper's Future of Defense research, which found 95% of IT leaders believe cyberattacks are becoming more sophisticated than their defenses.
Anne Cutler, Cybersecurity Evangelist at Keeper Security, broke down the research in this way:
"Artificial Intelligence (AI) is already part of the classroom, but our recent research shows that most schools are relying on informal guidelines rather than formal policies. That leaves both students and faculty uncertain about how AI can safely be used to enhance learning and where it could create unintended risks."
"What we found is that the absence of policy is less about reluctance and more about being in catch-up mode. Schools are embracing AI use, but governance hasn't kept pace. Policies provide a necessary framework that balances innovation with accountability. That means setting expectations for how AI can support learning, ensuring sensitive information such as student records or intellectual property cannot be shared with external platforms and mandating transparency about when and how AI is used in coursework or research. Taken together, these steps preserve academic integrity and protect sensitive data."
"Policies on their own are only part of the answer. Our research also points to the need for awareness and training. Rules provide structure, but education ensures they can be applied in practice. When schools combine clear policies with practical support, AI becomes a constructive, trusted resource rather than a source of uncertainty."
"Additionally, our research found that while almost every education leader is concerned about AI-related threats, only one in four feels confident in identifying them. The challenge is not a lack of awareness, but the difficulty of knowing when AI crosses the line from helpful to harmful. The same tools that help a student brainstorm an essay can also be misused to create a convincing phishing message or even a deepfake of a classmate. Without visibility, schools struggle to separate legitimate use from activity that introduces risk."
"The findings highlight the importance of safeguards that give schools greater visibility and control. Enhancing account protection with multi-factor authentication, managing access to sensitive systems through privileged access controls and using monitoring tools to detect unusual activity all help administrators better understand what is happening across their networks. These measures also give educational institutions the confidence to encourage responsible AI use without adding unnecessary friction or slowing innovation."
"The media can play an important role in this education, as well. Deepfakes and AI-driven phishing are not future concerns—they are today's reality. By covering real examples and helping communities understand how to spot manipulated content, the media can raise everyday awareness. When that awareness is paired with the right security practices, schools are in a much stronger position to make AI a trusted part of education."
Despite high concern, preparedness remains inconsistent. Only 32% of institutions feel "very prepared" to handle AI-related threats over the next two years. Policy development is fragmented:
51% have detailed AI policies; 53% rely on informal guidance.
Fewer than 60% use AI detection tools or student education programs.
Just 37% maintain incident response plans specifically for AI-driven threats.
Technical safeguards are still limited. Only 26% of respondents believe current AI detection tools are "very reliable," underscoring the need for layered defenses and human vigilance.
There are implications for students, educators, and cybersecurity teams:
For students:
Opportunities: Enhanced learning through AI-driven research, creativity, and language support.
Risks: Exposure to misinformation, privacy violations, and the temptation to misuse AI for harmful content or academic dishonesty.
For educators:
Opportunities: AI streamlines lesson prep, grading, and student engagement, freeing time for deeper teaching.
Risks: Faculty must be trained to detect AI-enabled scams and avoid over-reliance on flawed outputs.
For cybersecurity teams:
Opportunities: Implement cutting-edge identity and threat detection solutions to safeguard digital learning environments.
Risks: Without proactive investments, universities face increasing attacks against SaaS platforms, student data, and institutional reputation.
As Keeper stresses, the road ahead requires formalizing policies, investing in identity and access controls (MFA, PAM), and deploying advanced detection tools.
For colleges, universities, and K–12 institutions, the next two years are pivotal. Schools must: close the policy gap with clear AI usage frameworks; strengthen resilience through staff training, student education, and technical defenses; and safeguard trust by prioritizing privacy and ethical AI use.
AI in education can be transformative—but only if governance and security keep pace. Otherwise, the promise of smarter classrooms may give way to perilous vulnerabilities.
Key steps for schools include:
Develop and enforce formal AI usage policies for both faculty and students, with clear guidelines on acceptable use, privacy, and consequences for misuse.
Strengthen identity and access management (IAM) by implementing multi-factor authentication (MFA), privileged access management (PAM), and regular credential audits to reduce AI-driven impersonation and account takeover risks; a minimal audit sketch follows this list.
Invest in AI detection and monitoring tools to identify deepfakes, harmful content, and anomalous behavior early—while recognizing these tools must be paired with human oversight.
Educate faculty, staff, and students on AI's opportunities and risks, including phishing awareness, misinformation resilience, and ethical use training.
Establish incident response playbooks for AI threats, covering scenarios like AI-generated phishing, deepfake impersonation, and student misuse, with defined escalation paths and containment procedures.
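As one way to picture the credential-audit step above, the following minimal sketch works from an assumed directory export (accounts_export.csv with hypothetical username, mfa_enrolled, privileged, and password_last_changed columns) and flags accounts without MFA, privileged accounts without MFA, and passwords older than an assumed 180-day threshold.

```python
# Minimal sketch: audit a directory export for accounts missing MFA or with stale
# passwords. The file name, columns, and 180-day threshold are assumptions.
import csv
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=180)  # assumed audit threshold


def audit_accounts(path: str = "accounts_export.csv") -> list[str]:
    """Return findings for accounts that fail basic MFA and password-age checks."""
    findings: list[str] = []
    now = datetime.now()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            user = row["username"]
            has_mfa = row["mfa_enrolled"].strip().lower() == "yes"
            privileged = row.get("privileged", "").strip().lower() == "yes"
            if not has_mfa:
                note = " (privileged account, highest priority)" if privileged else ""
                findings.append(f"{user}: MFA not enrolled{note}")
            age = now - datetime.fromisoformat(row["password_last_changed"])
            if age > MAX_PASSWORD_AGE:
                findings.append(f"{user}: password unchanged for {age.days} days")
    return findings


if __name__ == "__main__":
    for finding in audit_accounts():
        print(finding)
```

In practice the same checks would run against the identity platform's own reporting interfaces and feed a remediation workflow, rather than being printed from a spreadsheet export.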
We asked cybersecurity vendor SMEs for their take on the research.
Kelvin Lim, Sr. Director, Head of Security Engineering, APAC, at Black Duck:
"Keeper Security's report shows that AI is changing education, however, it is also bringing new cybersecurity challenges."
He outlined what education boards and schools should do when using AI:
Secure the software supply chain - AI-generated code often relies on open-source components. Software composition analysis (SCA) helps catch vulnerabilities in those components before they are exploited (see the sketch after this list).
Policy enforcement - Many schools lack formal policies for AI tool use. AI policies and security frameworks can help schools to enforce security controls for staff and students when using AI.
Data protection & privacy - AI tools may handle sensitive student data. Schools should implement strong data encryption and access management to prevent leaks or misuse.
Building a security mindset - Technology and policies alone are not sufficient. Training programs can help staff and students recognize risks and adopt good cybersecurity practices when using AI.
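To make the software composition analysis point concrete, here is a minimal sketch that checks exactly pinned Python dependencies in a requirements.txt against the public OSV vulnerability database (osv.dev). The file path and the restriction to direct, exactly pinned packages are simplifying assumptions; dedicated SCA tools also resolve transitive dependencies, cover other ecosystems, and track license obligations.

```python
# Minimal sketch: query the public OSV database for known vulnerabilities in
# pinned PyPI dependencies listed in requirements.txt (path is an assumption).
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def check_dependency(name: str, version: str) -> list[str]:
    """Return OSV advisory IDs known for a specific PyPI package version."""
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": "PyPI"}}
    ).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])
    return [v.get("id", "unknown") for v in vulns]


def scan_requirements(path: str = "requirements.txt") -> None:
    """Print advisory status for each exactly pinned dependency (name==version)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # only exact pins are checked in this sketch
            name, version = line.split("==", 1)
            ids = check_dependency(name, version)
            status = ", ".join(ids) if ids else "no known advisories"
            print(f"{name}=={version}: {status}")


if __name__ == "__main__":
    scan_requirements()
```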
"AI can enhance the student's learning experience, but without proper cybersecurity, it creates risks. Proactive strategies are required to protect staff and students from cyberattacks, safeguard data, and ensure uncompromised trust in software for the increasingly regulated AI-powered world."
Alex Quilici, CEO at YouMail:
"The biggest cyber risk to schools is our kids. The reality is younger generations are the ones getting scammed the most. Gen Z, in particular, is impatient, naive, and easy to trick. Scam texts and calls bombard them every day, and they have not yet learned to pause and question what they are seeing."
"I always tell parents to protect their kids and educate them about these risks. One effective step is to have a family safe word that only your kids and you know. This can stop someone pretending to be your child from manipulating your family. Teaching kids to slow down and think before responding to messages is just as important."
"Scams are no longer just emails. They come in texts, calls, and even through AI-driven voice messages. Families need to stay alert, and schools can play a role by educating students about online safety. The more kids understand how scammers operate, the less likely they are to fall victim."