As cyber threats grow more frequent and sophisticated, organizations are turning to artificial intelligence as an integral part of their security strategy. AI and machine learning have become vital tools to detect never-before-seen attacks and respond to threats in real time.
While this represents an enormous leap in capability, it also poses potential risks such as data exposure, misinformation, and AI-enabled cyber attacks. As organizations adopt generative AI to enhance productivity, cybersecurity practices must evolve to safeguard these powerful technologies.
Effective AI governance has become imperative to balance the benefits and risks. But many businesses are implementing generative AI systems without sufficient oversight, according to a new global survey by cybersecurity vendor ExtraHop. The report reveals a concerning gap between the high adoption rates of generative AI and the lagging implementation of security policies, controls, and training.
The study of more than 1,200 IT and security leaders worldwide found that 73% reported at least occasional employee use of generative AI, showing rapid uptake. However, less than half of organizations have policies governing appropriate use of such technologies.
While 82% expressed confidence in defending against AI-related threats, fewer than half are monitoring employee usage, leaving potential data leaks undetected. Nearly a third of organizations have banned generative AI tools entirely, yet only 5% report no employee usage at all, suggesting outright bans are largely ineffective.
Moreover, respondents' top concerns centered on inaccurate responses rather than on risks like data exposure or financial loss, suggesting that leaders underestimate the security implications.
John Allen, Vice President of Cyber Risk & Compliance at Darktrace, shared his thoughts on generative AI with SecureWorld News:
"Because of the current and future risks posed by generative AI, I expect we will see data privacy regulations strengthened in the near future. Citizens care about privacy and will expect their representatives to enact laws and regulations to protect it. As an industry, in order to realize the anticipated value from AI, we need to work alongside governing bodies to help ensure a level of consistency and sensibility are present in potential laws and regulations.
The use of generative AI tools is ultimately still in its infancy and there are still many questions that need to be addressed to help ensure data privacy is respected and organizations can remain compliant. We all have a role to play in better understanding the potential risks and ensuring that the right guardrails and policies are put in place to protect privacy and keep data secure."
The report concluded that despite high adoption rates, basic security hygiene around generative AI is severely lacking. It advised companies to urgently prioritize training, monitoring capabilities, and data governance policies to ensure safe AI implementation.
With usage widespread but oversight limited, organizations must act quickly to implement governance frameworks around generative AI. Investing in visibility tools, security controls, and user policies will allow businesses to maximize benefits while mitigating risks.